We have noticed a rather strange issue with vertex colour blending on AMD GPUs. For some reason, when vertices are rendered on an AMD GPU, the colours do not blend (interpolate) between the vertices for us.
void gradient_rect(vec3i pos, vec3i size, colour top_left, colour top_right, colour bottom_left, colour bottom_right, float rounding = 0.f) override {
    c_vertex verts[4] = {
        {vec4(pos.x, pos.y, 0.f, 0.f), top_left.tohex()},
        {vec4(pos.x + size.x, pos.y, 0.f, 0.f), top_right.tohex()},
        {vec4(pos.x, pos.y + size.y, 0.f, 0.f), bottom_left.tohex()},
        {vec4(pos.x + size.x, pos.y + size.y, 0.f, 0.f), bottom_right.tohex()},
    };

    for (u32 i = 0; i < 4; i++)
        add_vert(verts[i], D3DPT_TRIANGLESTRIP);

    flush_to_gpu(current_primitive_type);
}
Our system works by creating a vertex buffer that is shared with the GPU and mapped into client memory; we push vertices into this buffer and then flush them to the GPU.
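For context, the flush step follows the usual dynamic-vertex-buffer pattern; a minimal sketch (with illustrative names and simplified signatures, not our actual engine code) looks roughly like this:

// Rough sketch of the batching flush: lock the dynamic vertex buffer, copy the
// accumulated vertices in, and issue a single DrawPrimitive for the whole batch.
void flush_batch(IDirect3DDevice9* device, IDirect3DVertexBuffer9* vb,
                 const c_vertex* verts, u32 count, D3DPRIMITIVETYPE type)
{
    void* mapped = nullptr;
    // D3DLOCK_DISCARD lets the driver hand back a fresh region instead of stalling.
    if (FAILED(vb->Lock(0, count * sizeof(c_vertex), &mapped, D3DLOCK_DISCARD)))
        return;
    memcpy(mapped, verts, count * sizeof(c_vertex));
    vb->Unlock();

    device->SetStreamSource(0, vb, 0, sizeof(c_vertex));
    // A triangle strip of N vertices produces N - 2 primitives.
    const UINT prim_count = (type == D3DPT_TRIANGLESTRIP) ? count - 2 : count / 3;
    device->DrawPrimitive(type, 0, prim_count);
}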
We assumed it was something related to our batching system, so we tried calling DrawPrimitiveUP manually and got exactly the same result. It seems that some specific render state flag could be responsible for this behaviour, and we just don't have the expertise or knowledge to tell exactly why it is occurring.
We had a tester dump his entire render state so we could compare it against the render states on our development machines, and we noticed no real differences that could cause any issues.
To confirm that this is specific to AMD GPUs, we had two other people verify that it happens exclusively on AMD hardware; one of them also swapped their AMD GPU out for an NVIDIA GPU, and the problem went away.
This is what a correctly blended vertex should look like:
This is what an incorrectly blended vertex looks like on AMD GPUs:
We have considerable reason to suspect that the root of this problem is a render state that is not set correctly for AMD GPUs.
For anyone who is wondering, here is the render state we set before rendering starts:
device->SetPixelShader(nullptr);
device->SetVertexShader(nullptr);
device->SetTexture(0, nullptr);
device->SetFVF(D3DFVF_XYZRHW | D3DFVF_DIFFUSE);
device->SetRenderState(D3DRS_MULTISAMPLEANTIALIAS, false);
device->SetRenderState(D3DRS_SRGBWRITEENABLE, false);
device->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);
device->SetRenderState(D3DRS_LIGHTING, false);
device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
device->SetRenderState(D3DRS_ALPHABLENDENABLE, true);
device->SetRenderState(D3DRS_SCISSORTESTENABLE, true);
// colour related
device->SetRenderState(D3DRS_COLORWRITEENABLE, 0xFFFFFFFF);
device->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_MODULATE);
device->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
device->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);
device->SetTextureStageState(0, D3DTSS_ALPHAOP, D3DTOP_MODULATE);
device->SetTextureStageState(0, D3DTSS_ALPHAARG1, D3DTA_TEXTURE);
device->SetTextureStageState(0, D3DTSS_ALPHAARG2, D3DTA_DIFFUSE);
device->SetStreamSource(0, vertex_buffer, 0, sizeof(c_vertex));
Any help would be greatly appreciated.
Cheers.
Related
I'm working on drawing individual pixels to a UIView to create fractal images. My problem is my rendering speed. I am currently running this loop 260,000 times, but would like to render even more pixels. As it is, it takes about 5 seconds to run on my iPad Mini.
I was using a UIBezierPath before, but that was even a bit slower (about 7 seconds). I've been looking into NSBitmap-related approaches, but I'm not exactly sure whether that would speed things up or how to implement it in the first place.
I was also thinking about storing the pixels from my loop in an array, and then drawing them all together after the loop. Again, though, I am not quite sure of the best way to store pixels in an array and then retrieve them for drawing.
Any help on speeding up this process would be great.
for (int i = 0; i < 260000; i++) {
    float RN = drand48();
    for (int i = 1; i < numBuckets; i++) {
        if (RN < bucket[i]) {
            col = i;
            CGContextSetFillColor(context, CGColorGetComponents([UIColor colorWithRed:(colorSelector[i][0]) green:(colorSelector[i][1]) blue:(colorSelector[i][2]) alpha:(1)].CGColor));
            break;
        }
    }
    xT = myTextFieldArray[1][1][col]*x1 + myTextFieldArray[1][2][col]*y1 + myTextFieldArray[1][5][col];
    yT = myTextFieldArray[1][3][col]*x1 + myTextFieldArray[1][4][col]*y1 + myTextFieldArray[1][6][col];
    x1 = xT;
    y1 = yT;
    if (i > 10000) {
        CGContextFillRect(context, CGRectMake(xOrigin+(xT-xMin)*sizeScalar, yOrigin-(yT-yMin)*sizeScalar, .5, .5));
    }
    else if (i < 10000) {
        if (x1 < xMin) {
            xMin = x1;
        }
        else if (x1 > xMax) {
            xMax = x1;
        }
        if (y1 < yMin) {
            yMin = y1;
        }
        else if (y1 > yMax) {
            yMax = y1;
        }
    }
    else if (i == 10000) {
        if (xMax - xMin > yMax - yMin) {
            sizeScalar = 960/(xMax - xMin);
            yOrigin = 1000 - (1000 - sizeScalar*(yMax - yMin))/2;
        }
        else {
            sizeScalar = 960/(yMax - yMin);
            xOrigin = (1000 - sizeScalar*(xMax - xMin))/2;
        }
    }
}
Edit
I created a multidimensional array to store UIColors in, so I could draw my image as a bitmap. It is significantly faster, but my colors are not coming out correctly now.
Here is where I am storing my UIColors into the array:
int xPixel = xOrigin+(xT-xMin)*sizeScalar;
int yPixel = yOrigin-(yT-yMin)*sizeScalar;
pixelArray[1000-yPixel][xPixel] = customColors[col];
Here is my drawing stuff:
CGDataProviderRef provider = CGDataProviderCreateWithData(nil, pixelArray, 1000000, nil);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef image = CGImageCreate(1000,
1000,
8,
32,
4000,
colorSpace,
kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipLast,
provider,
nil, //No decode
NO, //No interpolation
kCGRenderingIntentDefault); // Default rendering
CGContextDrawImage(context, self.bounds, image);
Not only are the colors not what they are supposed to be, but every time I render my image, the colors are completely different from the previous time. I have been testing different things with the colors, but I still have no idea why they are wrong, and I'm even more confused about how they keep changing.
Per-pixel drawing — with complicated calculations for each pixel, like fractal rendering — is one of the hardest things you can ask a computer to do. Each of the other answers here touches on one aspect of its difficulty, but that's not quite all. (Luckily, this kind of rendering is also something that modern hardware is optimized for, if you know what to ask it for. I'll get to that.)
Both @jcaron and @JustinMeiners note that vector drawing operations (even rect fill) in CoreGraphics take a penalty for CPU-based rasterization. Manipulating a buffer of bitmap data would be faster, but not a lot faster.
Getting that buffer onto the screen also takes time, especially if you're having to go through a process of creating bitmap image buffers and then drawing them in a CG context — that's doing a lot of sequential drawing work on the CPU and a lot of memory-bandwidth work to copy that buffer around. So @JustinMeiners is right that direct access to GPU texture memory would be a big help.
However, if you're still filling your buffer in CPU code, you're still hampered by two costs (at best, worse if you do it naively):
sequential work to render each pixel
memory transfer cost from texture memory to frame buffer when rendering
@JustinMeiners' answer is good for his use case — image sequences are pre-rendered, so he knows exactly what each pixel is going to be and he just has to schlep it into texture memory. But your use case requires a lot of per-pixel calculations.
Luckily, per-pixel calculations are what GPUs are designed for! Welcome to the world of pixel shaders. For each pixel on the screen, you can run an independent calculation to determine the relationship of that point to your fractal set and thus what color to draw it in. The GPU can run that calculation in parallel for many pixels at once, and its output goes straight to the screen, so there's no memory overhead from dumping a bitmap into the framebuffer.
One easy way to work with pixel shaders on iOS is SpriteKit — it can handle most of the necessary OpenGL/Metal setup for you, so all you have to write is the per-pixel algorithm in GLSL (actually, a subset of GLSL that gets automatically translated to Metal shader language on Metal-supported devices). Here's a good tutorial on that, and here's another on OpenGL ES pixel shaders for iOS in general.
If you really want to change many individual pixels, your best option is probably to allocate a chunk of memory (of size width * height * bytes per pixel), make the changes directly in memory, and then convert the whole thing into a bitmap at once with CGBitmapContextCreateWithData.
There may be even faster methods than this (see Justin's answer).
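A rough sketch of that approach (buffer size and alpha setting are illustrative; Core Graphics is a plain C API, so this drops straight into the drawRect: above):

// Fill a plain RGBA byte buffer directly, then wrap it once and draw it.
const size_t width = 1000, height = 1000, bpp = 4;
uint8_t *pixels = (uint8_t *)calloc(width * height * bpp, 1);   // zeroed = transparent black

// Inside the loop, instead of CGContextFillRect:
//   size_t off = (yPixel * width + xPixel) * bpp;
//   pixels[off] = r; pixels[off + 1] = g; pixels[off + 2] = b; pixels[off + 3] = 255;

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmap = CGBitmapContextCreateWithData(pixels, width, height, 8,
                                                    width * bpp, colorSpace,
                                                    kCGImageAlphaPremultipliedLast,
                                                    NULL, NULL);
CGImageRef image = CGBitmapContextCreateImage(bitmap);
CGContextDrawImage(context, viewBounds, image);   // context and viewBounds as in your drawRect:

CGImageRelease(image);
CGContextRelease(bitmap);
CGColorSpaceRelease(colorSpace);
free(pixels);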
If you want to maximize render speed I would recommend bitmap rendering. Vector rasterization is much slower and CGContext drawing isn't really intended for high performance realtime rendering.
I faced a similar technical challenge and found CVOpenGLESTextureCacheRef to be the fastest. The texture cache allows you to upload a bitmap directly into graphics memory for fast rendering. Rendering utilizes OpenGL, but because it's just a 2D fullscreen image, you really don't need to learn much about OpenGL to use it.
You can see an example I wrote of using the texture cache here: https://github.com/justinmeiners/image-sequence-streaming
My original question related to this is here:
How to directly update pixels - with CGImage and direct CGDataProvider
My project renders bitmaps from files, so it is a little bit different, but you could look at ISSequenceView.m for an example of how to use the texture cache and set up OpenGL for this kind of rendering.
Your rendering procedure could look something like this:
1. Draw to buffer (raw bytes)
2. Lock texture cache
3. Copy buffer to texture cache
4. Unlock texture cache.
5. Draw fullscreen quad with texture
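A condensed, untested sketch of those five steps (error checking omitted; eaglContext, width, height and myRawBytes are assumed to already exist, and the fullscreen-quad drawing is left out; see ISSequenceView.m for the real implementation):

CVOpenGLESTextureCacheRef cache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext, NULL, &cache);

// The pixel buffer must be IOSurface-backed for the cache to accept it.
CFDictionaryRef empty = CFDictionaryCreate(kCFAllocatorDefault, NULL, NULL, 0,
    &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFMutableDictionaryRef attrs = CFDictionaryCreateMutable(kCFAllocatorDefault, 1,
    &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attrs, kCVPixelBufferIOSurfacePropertiesKey, empty);

CVPixelBufferRef buffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                    kCVPixelFormatType_32BGRA, attrs, &buffer);

// Steps 1-4: draw into the locked buffer, then hand it to the texture cache.
CVPixelBufferLockBaseAddress(buffer, 0);
memcpy(CVPixelBufferGetBaseAddress(buffer), myRawBytes,
       CVPixelBufferGetBytesPerRow(buffer) * height);
CVPixelBufferUnlockBaseAddress(buffer, 0);

CVOpenGLESTextureRef texture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, cache, buffer, NULL,
    GL_TEXTURE_2D, GL_RGBA, (GLsizei)width, (GLsizei)height,
    GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);

// Step 5: bind the cached texture and draw the fullscreen quad with it.
glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));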
I'm using OpenTK in MonoTouch to render some textures in iOS, and some of the textures come up broken. This is a closeup of an iPad screenshot showing one correctly rendered texture (the top one) and two broken ones below:
I'm not doing anything weird. I'm loading the texture from a semitransparent PNG using CGImage->CGBitmapContext->GL.TexImage2D. I'm rendering each sprite with two triangles, and my fragment shader just reads the texel from the sampler with texture2D() and multiplies it by a uniform vec4 to color the texture.
The files themselves seem to be okay, and the Android port of the same application (using Mono for Android and the exact same binary resources) renders them perfectly. As you can see, other transparent textures work fine.
If it helps, pretty much every texture is broken when I run the program in the simulator. The problem also persists if I rebuild the program.
Any ideas on how to figure out what is causing this problem?
Here's my vertex shader:
attribute vec4 spritePosition;
attribute vec2 textureCoords;
uniform mat4 projectionMatrix;
uniform vec4 color;
varying vec4 colorVarying;
varying vec2 textureVarying;
void main()
{
    gl_Position = projectionMatrix * spritePosition;
    textureVarying = textureCoords;
    colorVarying = color;
}
Here's my fragment shader:
varying lowp vec4 colorVarying;
varying lowp vec2 textureVarying;
uniform sampler2D spriteTexture;
void main()
{
    gl_FragColor = texture2D(spriteTexture, textureVarying) * colorVarying;
}
I'm loading the image like this:
using (var bitmap = UIImage.FromFile(resourcePath).CGImage)
{
    IntPtr pixels = Marshal.AllocHGlobal(bitmap.Width * bitmap.Height * 4);
    using (var context = new CGBitmapContext(pixels, bitmap.Width, bitmap.Height, 8, bitmap.Width * 4, bitmap.ColorSpace, CGImageAlphaInfo.PremultipliedLast))
    {
        context.DrawImage(new RectangleF(0, 0, bitmap.Width, bitmap.Height), bitmap);

        int[] textureNames = new int[1];
        GL.GenTextures(1, textureNames);
        GL.BindTexture(TextureTarget.Texture2D, textureNames[0]);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)All.Linear);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)All.Linear);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (int)All.ClampToEdge);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (int)All.ClampToEdge);
        GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, bitmap.Width, bitmap.Height, 0, PixelFormat.Rgba, PixelType.UnsignedByte, pixels);

        CurrentResources.Add(resourceID, new ResourceData(resourcePath, resourceType, 0, new TextureEntry(textureNames[0], bitmap.Width, bitmap.Height)));
    }
}
and in my onRenderFrame, I have this:
GL.ClearColor(1.0f, 1.0f, 1.0f, 1.0f);
GL.Clear(ClearBufferMask.ColorBufferBit);
GL.Enable(EnableCap.Blend);
GL.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.OneMinusSrcAlpha);
GL.UseProgram(RenderingProgram);
GL.VertexAttribPointer((int)ShaderAttributes.SpritePosition, 2, VertexAttribPointerType.Float, false, 0, squareVertices);
GL.VertexAttribPointer((int)ShaderAttributes.TextureCoords, 2, VertexAttribPointerType.Float, false, 0, squareTextureCoords);
GL.EnableVertexAttribArray((int)ShaderAttributes.SpritePosition);
GL.EnableVertexAttribArray((int)ShaderAttributes.TextureCoords);
//...
GL.ActiveTexture(TextureUnit.Texture0);
GL.BindTexture(TextureTarget.Texture2D, textureEntry.TextureID);
GL.Uniform1(Uniforms[(int)ShaderUniforms.Texture], 0);
// ...
GL.DrawArrays(BeginMode.TriangleStrip, 0, 4);
That triangle strip is made out of two triangles that make up the sprite, with the vertex and texture coordinates set to where I want to show it. projectionMatrix is a simple orthographic projection matrix.
As you can see, I'm not trying to do anything fancy here. This is all pretty standard code, and it works for some textures, so I think that in general the code is okay. I'm also doing pretty much the same thing in Mono for Android, and it works pretty well without any texture corruption.
Corrupted colors like that smell like uninitialized variables somewhere, and seeing it happen only on the transparent parts leads me to believe that I have uninitialized alpha values somewhere. However, GL.Clear(ClearBufferMask.ColorBufferBit) should clear my alpha values, and even so, the background texture has an alpha value of 1, and with the current BlendFunc that should set the alpha for those pixels to 1. Afterwards, the transparent textures have alpha values ranging from 0 to 1, so they should blend properly. I see no uninitialized variables anywhere.
...or... this is all the fault of CGBitmapContext. Maybe by doing DrawImage, I'm not blitting the source image but drawing it with blending instead, and the garbage data comes from when I called AllocHGlobal. That doesn't explain why it consistently happens with just these two textures, though... (I'm tagging this as core-graphics so maybe one of the Quartz people can help.)
Let me know if you want to see some more code.
Okay, it is just as I had expected. The memory I get with Marshal.AllocHGlobal is not initialized to anything, and CGBitmapContext.DrawImage just renders the image on top of whatever is in the context, which is garbage.
So the way to fix this is simply to insert a context.ClearRect() call before I call context.DrawImage().
I don't know why it worked fine with other (larger) textures, but maybe it is because in those cases, I'm requesting a large block of memory, so the iOS (or mono) memory manager gets a new zeroed block, while for the smaller textures, I'm reusing memory previously freed, which has not been zeroed.
It would be nice if this memory were initialized to a pattern like 0xBAADF00D when using the debug heap, the way LocalAlloc does in the Windows API.
Two other somewhat related things to remember:
In the code I posted, I'm not releasing the memory requested with AllocHGlobal. This is a bug. GL.TexImage2D copies the texture to VRAM, so it is safe to free it right there.
context.DrawImage is drawing the image into a new context (instead of reading the raw pixels from the image), and Core Graphics only works with premultiplied alpha (which I find idiotic). So the loaded texture will always be loaded with premultiplied alpha if I do it in this way. This means that I must also change the alpha blending function to GL.BlendFunc(BlendingFactorSrc.One, BlendingFactorDest.OneMinusSrcAlpha), and make sure that all crossfading code works over the entire RGBA, and not just the alpha value.
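To illustrate that last point, here is a tiny sketch (an illustrative helper, not project code) of what a crossfade means once colours are premultiplied, together with the matching blend function:

// With premultiplied alpha every stored channel is already colour * alpha, so fading a
// sprite means scaling all four channels, not just the alpha value.
struct RGBA { float r, g, b, a; };

RGBA fade_premultiplied(RGBA c, float t)   // t in [0, 1]
{
    return { c.r * t, c.g * t, c.b * t, c.a * t };
}

// ...and the blend state that matches premultiplied sources:
// GL.BlendFunc(BlendingFactorSrc.One, BlendingFactorDest.OneMinusSrcAlpha);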
I am drawing a sky sphere as the background for a 3D view. Occasionally, when navigating around the view, there is a visual glitch that pops in:
Example of the glitch: a black shape where rendering has apparently not placed fragments onscreen
Black is the colour the device is cleared to at the beginning of each frame.
The shape of the black area is different each time, and is sometimes visibly made of many polygons. They are always centred around a common point, usually close to the centre of the screen.
Repainting without changing the navigation (eye position and look direction) doesn't make the glitch vanish, i.e. it does seem to depend on the specific navigation.
The moment the navigation changes, even by an infinitesimal amount, it vanishes and the sky draws solidly. The vast majority of painting is correct; eventually, as you move around, you will spot another glitch.
Changing the radius of the sphere (to, say, 0.9 of the near/far plane distance) doesn't seem to remove the glitches.
Changing Z-buffer writing or the Z test in the effect technique makes no difference.
There is no DX debug output (when running with the debug version of the runtime, maximum validation, and shader debugging enabled.)
What could be the cause of these glitches?
I am using Direct3D9 (June 2010 SDK), shaders compiled to SM3, and the glitch has been observed on ATI cards and VMWare Fusion virtual cards on Windows 7 and XP.
Example code
The sky is being drawn as a sphere (error checking etc. removed from the code below):
To create
const float fRadius = GetScene().GetFarPlane() - GetScene().GetNearPlane()*2;
D3DXCreateSphere(GetScene().GetDevicePtr(), fRadius, 64, 64, &m_poSphere, 0);
Changing the radius doesn't seem to affect the presence of glitches.
Vertex shader
OutputVS ColorVS(float3 posL : POSITION0, float4 c : COLOR0) {
    OutputVS outVS = (OutputVS)0;
    // Center around the eye
    posL += g_vecEyePos;
    // Transform to homogeneous clip space.
    outVS.posH = mul(float4(posL, 1.0f), g_mWorldViewProj).xyzw; // Always on the far plane
Pixel shader
It doesn't matter; even one outputting a solid colour will glitch:
float4 ColorPS(float altitude : COLOR0) : COLOR {
    return float4(1.0, 0.0, 0.0, 1.0);
}
The same image with a solid-colour pixel shader, to be certain the PS isn't the cause of the problem
Technique
technique BackgroundTech {
    pass P0 {
        // Specify the vertex and pixel shader associated with this pass.
        vertexShader = compile vs_3_0 ColorVS();
        pixelShader = compile ps_3_0 ColorPS();
        // sky is visible from inside - cull mode is inverted (clockwise)
        CullMode = CW;
    }
}
I tried adding in state settings affecting the depth, such as ZWriteEnabled = false. None made any difference.
The problem is certainly caused by far plane clipping. If changing the sphere's radius a bit doesn't help, then the sphere's position may be wrong...
Make sure you're properly initializing the g_vecEyePos constant (maybe you've misspelled it in one of the DirectX SetShaderConstant calls?).
Also, if you've already included the translation to the eye's position in the world matrix of g_mWorldViewProj, you shouldn't do posL += g_vecEyePos; in your VS, because that moves each vertex by twice the eye's position.
In other words you should choose one of these options:
g_mWorldViewProj = mCamView * mCamProj; and posL += g_vecEyePos;
g_mWorldViewProj = MatrixTranslation(g_vecEyePos) * mCamView * mCamProj;
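For example, option 2 looks roughly like this on the CPU side (illustrative names; effect stands for your ID3DXEffect), with the posL += g_vecEyePos line removed from the vertex shader:

// Bake the eye translation into the world matrix instead of adding it in the shader.
D3DXMATRIX world, wvp;
D3DXMatrixTranslation(&world, eyePos.x, eyePos.y, eyePos.z);
wvp = world * view * proj;                    // D3DX row-vector convention: world * view * proj
effect->SetMatrix("g_mWorldViewProj", &wvp);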
UPDATE
I got around CG's limitations by drawing everything with OpenGL. Still some glitches, but so far it's working much, much faster.
Some interesting points:
GLKView: that's an iOS-specific view, and it helps a lot in setting up the OpenGL context and rendering loop. If you're not on iOS, I'm afraid you're on your own.
Shader precision: the precision of shader variables in the current version of OpenGL ES (2.0) is 16 bits. That was a little low for my purposes, so I emulated 32-bit arithmetic with pairs of 16-bit variables.
GL_LINES: OpenGL ES can natively draw simple lines. Not very well (no joints, no caps; see the purple/grey line at the top of the screenshot below), but to improve on that you'll have to write a custom shader, convert each line into a triangle strip and pray that it works! (Supposedly that's how browsers do it when they tell you that Canvas2D is GPU-accelerated.) A minimal sketch of that conversion follows this list.
Draw as little as possible. I suppose that makes sense, but you can frequently avoid rendering things that are, for instance, outside of the viewport.
OpenGL ES has no support for filled polygons, so you have to tessellate them yourself. Consider using iPhone-GLU: it's a port of the MESA code and it's pretty good, although it's a little hard to use (no standard Objective-C interface).
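As promised in the GL_LINES point above, here is a minimal sketch of the line-to-triangle-strip conversion (plain C++ math with illustrative names; no GL calls):

#include <cmath>

struct Vec2 { float x, y; };

// Expand one segment into a quad of the requested thickness; the four vertices are
// ordered so they can be drawn directly as a GL_TRIANGLE_STRIP.
void line_to_quad(Vec2 a, Vec2 b, float width, Vec2 out[4])
{
    float dx = b.x - a.x, dy = b.y - a.y;
    float len = std::sqrt(dx * dx + dy * dy);
    if (len == 0.0f) return;                  // degenerate segment, nothing to emit
    float nx = -dy / len * width * 0.5f;      // unit normal scaled to half the thickness
    float ny =  dx / len * width * 0.5f;
    out[0] = { a.x + nx, a.y + ny };
    out[1] = { a.x - nx, a.y - ny };
    out[2] = { b.x + nx, b.y + ny };
    out[3] = { b.x - nx, b.y - ny };
}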
Original Question
I'm trying to draw lots of CGPaths (typically more than 1000) in the drawRect method of my scroll view, which is refreshed when the user pans with his finger. I have the same application in JavaScript for the browser, and I'm trying to port it to an iOS native app.
The iOS test code is (with 100 line operations, path being a pre-made CGMutablePathRef):
- (void) drawRect:(CGRect)rect {
    // Start the timer
    BSInitClass(@"Renderer");
    BSStartTimedOp(@"Rendering");

    // Get the context
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(context, 2.0);
    CGContextSetFillColorWithColor(context, [[UIColor redColor] CGColor]);
    CGContextSetStrokeColorWithColor(context, [[UIColor blueColor] CGColor]);
    CGContextTranslateCTM(context, 800, 800);

    // Draw the points
    CGContextAddPath(context, path);
    CGContextStrokePath(context);

    // Display the elapsed time
    BSEndTimedOp(@"Rendering");
}
In JavaScript, for reference, the code is (with 10000 line operations):
window.onload = function() {
    canvas = document.getElementById("test");
    ctx = canvas.getContext("2d");

    // Prepare the points before drawing
    var data = [];
    for (var i = 0; i < 100; i++) data.push({x: Math.random()*canvas.width, y: Math.random()*canvas.height});

    // Draw those points, and write the elapsed time
    var __start = new Date().getTime();
    for (var i = 0; i < 100; i++) {
        for (var j = 0; j < data.length; j++) {
            var d = data[j];
            if (j == 0) ctx.moveTo(d.x, d.y);
            else ctx.lineTo(d.x, d.y);
        }
    }
    ctx.stroke();

    document.write("Finished in " + (new Date().getTime() - __start) + "ms");
};
Now, I'm much more proficient in optimizing JavaScript than I am at iOS, but, after some profiling, it seems that CGPath's overhead is absolutely, incredibly bad compared to JavaScript. Both snippets run at about the same speed on a real iOS device, and the JavaScript code has 100x the number of line operations of the Quartz2D code!
EDIT: Here is the top of the time profiler in Instruments:
Running Time Self Symbol Name
6487.0ms 77.8% 6487.0 aa_render
449.0ms 5.3% 449.0 aa_intersection_event
112.0ms 1.3% 112.0 CGSColorMaskCopyARGB8888
73.0ms 0.8% 73.0 objc::DenseMap<objc_object*, unsigned long, true, objc::DenseMapInfo<objc_object*>, objc::DenseMapInfo<unsigned long> >::LookupBucketFor(objc_object* const&, std::pair<objc_object*, unsigned long>*&) const
69.0ms 0.8% 69.0 CGSFillDRAM8by1
66.0ms 0.7% 66.0 ml_set_interrupts_enabled
46.0ms 0.5% 46.0 objc_msgSend
42.0ms 0.5% 42.0 floor
29.0ms 0.3% 29.0 aa_ael_insert
It is my understanding that this should be much faster on iOS, simply because the code is native... So, do you know:
...what I am doing wrong here?
...and if there's another, better solution to draw that many lines in real-time?
Thanks a lot!
As you described in your question, using OpenGL is the right solution.
Theoretically, you can emulate any kind of graphics drawing with OpenGL, but you need to implement all the shape algorithms yourself. For example, you need to expand the edges and corners of lines yourself; there's no real concept of lines in OpenGL. Its line drawing is a utility feature that is almost only used for debugging. You should treat everything as a set of triangles.
I believe 16-bit floats are enough for most drawings. If you're using coordinates with large values, consider dividing the space into multiple sectors to keep the coordinate values smaller. Float precision degrades as values become very large or very small.
Update
I think you will run into the issue below soon if you try to display UIKit content over an OpenGL view. Unfortunately, I haven't found a solution for it yet.
How to synchronize OpenGL drawing with UIKit updates
You killed CGPath performance by using CGContextAddPath.
Apple explicitly says this will run slowly - if you want it to run fast, you are required to attach your CGPath objects to CAShapeLayer instances.
You're doing dynamic, runtime drawing - blocking all of Apple's performance optimizations. Try switching to CALayer - especially CAShapeLayer - and you should see performance improve by a large amount.
(NB: there are other performance bugs in CG rendering that might affect this use case, such as obscure default settings in CG/Quartz/CA, but ... you need to get rid of the bottleneck on CGContextAddPath first)
My question is similar to OpenCV: Detect blinking lights in a video feed
openCV detect blinking lights
I want to detect the LED on/off status in any image that contains LED objects. The LEDs can be of any size (but are mostly circular). It is important to get the locations of all the LEDs in the image, whether they are on or off, although first of all I would like to get the status and position of the LEDs that are on. Right now my image source is static for this work, but it must eventually come from video of a product with glowing LEDs, so there is no chance of having a template image to subtract the background.
I have tried using OpenCV (I am new to OpenCV) with threshold, contour and circle methods, but have not been successful. Please share any source code or solution. The solution does not have to use OpenCV; anything that gives me a result would be greatly appreciated.
The difference from the other two questions is that I want to get the number of LEDs in the image, whether they are on or off, plus the status of all the LEDs. I know this is very complex. First of all I was trying to detect the glowing LEDs in the image. I have implemented the code shared below. I had different implementations, but the code below is able to show me the glowing LEDs just by drawing the contours; however, the number of contours is higher than the number of glowing LEDs, so I am not even able to get the total count of glowing LEDs. Please give me your input.
int main(int argc, char* argv[])
{
    IplImage* newImg = NULL;
    IplImage* grayImg = NULL;
    IplImage* contourImg = NULL;

    float minAreaOfInterest = 180.0;
    float maxAreaOfInterest = 220.0;

    // parameters for the contour detection
    CvMemStorage* storage = cvCreateMemStorage(0);
    CvSeq* contours = 0;
    int mode = CV_RETR_EXTERNAL;
    mode = CV_RETR_CCOMP; // detect both outside and inside contours

    cvNamedWindow("src", 1);
    cvNamedWindow("Threshhold", 1);

    // load original image
    newImg = cvLoadImage(argv[1], 1);

    IplImage* imgHSV = cvCreateImage(cvGetSize(newImg), 8, 3);
    cvCvtColor(newImg, imgHSV, CV_BGR2HSV);
    cvNamedWindow("HSV", 1);
    cvShowImage("HSV", imgHSV);

    IplImage* imgThreshed = cvCreateImage(cvGetSize(newImg), 8, 1);
    cvInRangeS(newImg, cvScalar(20, 100, 100), cvScalar(30, 255, 255), imgThreshed);
    cvShowImage("src", newImg);
    cvShowImage("Threshhold", imgThreshed);

    // make a copy of the original image to draw the detected contours on
    contourImg = cvCreateImage(cvGetSize(newImg), IPL_DEPTH_8U, 3);
    contourImg = cvCloneImage(newImg);
    cvNamedWindow("Contour", 1);

    // find the contours
    cvFindContours(imgThreshed, storage, &contours, sizeof(CvContour), mode, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));

    int i = 0;
    for (; contours != 0; contours = contours->h_next)
    {
        i++;
        //ext_color = CV_RGB( rand()&255, rand()&255, rand()&255 ); // randomly coloring different contours
        cvDrawContours(contourImg, contours, CV_RGB(0, 255, 0), CV_RGB(255, 0, 0), 2, 2, 8, cvPoint(0, 0));
    }
    printf("Total Contours:%d\n", i);
    cvShowImage("Contour", contourImg);

    cvWaitKey(0);

    cvDestroyWindow("src");
    cvDestroyWindow("Threshhold");
    cvDestroyWindow("HSV");
    cvDestroyWindow("Contour");

    cvReleaseImage(&newImg);
    cvReleaseImage(&imgThreshed);
    cvReleaseImage(&imgHSV);
    cvReleaseImage(&contourImg);
}
I had some time yesterday night; here is a (very) simple and partial solution that works fine for me.
I created a git repository that you can clone directly:
git://github.com/jlengrand/image_processing.git
and run using Python
$ cd image_processing/LedDetector/
$ python leddetector/led_highlighter.py
You can see the code here
My method:
Convert to a one-channel image.
Search for the brightest pixel, assuming that we have at least one LED on and a dark background, as in your image.
Create a binary image containing the brightest part of the image.
Extract the blobs from the image, and retrieve their centers and the number of LEDs.
The code only handles one image at this point, but you can enhance it with a loop to process a batch of images (I already provide some example images in my repo).
You simply have to play around a bit with the centers found for the LEDs, as they might not be pixel-accurate from one image to another (a center could be slightly shifted).
To make the algorithm more robust (knowing whether any LED is on at all, finding an automatic rather than hard-coded margin value), you can play around a bit with the histogram (in extract_bright).
I have already created the function for that; you should just have to enhance it a bit.
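For reference, here is a rough C++ translation of those four steps using the cv:: API (the brightness margin of 40 is an arbitrary placeholder, and constant names vary a little between OpenCV versions):

#include <opencv2/opencv.hpp>
#include <cstdio>

int main(int argc, char* argv[])
{
    cv::Mat src = cv::imread(argv[1]);               // photo containing the LEDs
    cv::Mat gray;
    cv::cvtColor(src, gray, CV_BGR2GRAY);            // 1. one-channel image

    double minVal = 0, maxVal = 0;
    cv::minMaxLoc(gray, &minVal, &maxVal);           // 2. brightest pixel

    cv::Mat bright;                                  // 3. keep only the brightest part
    cv::threshold(gray, bright, maxVal - 40, 255, cv::THRESH_BINARY);

    std::vector<std::vector<cv::Point> > contours;   // 4. blobs -> centers and count
    cv::findContours(bright, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    for (size_t i = 0; i < contours.size(); ++i)
    {
        cv::Moments m = cv::moments(contours[i]);
        if (m.m00 > 0)
            std::printf("LED at (%.0f, %.0f)\n", m.m10 / m.m00, m.m01 / m.m00);
    }
    std::printf("Glowing LEDs found: %zu\n", contours.size());
    return 0;
}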
Some more information concerning the input data:
OpenCV only accepts AVI files for now, so you will have to convert the MP4 file to AVI (uncompressed in my case). I used this; it worked perfectly.
For some reason, the QueryFrame function caused memory leaks on my computer. That is why I created the grab_images function, which takes the AVI file as input and creates a batch of JPG images that are easier to work with.
Here is the result for an image:
Input image:
Binary image:
Final result:
Hope this helps...
EDIT:
Your problem is slightly more complex if you want to use this image. The method I posted can still be used, but needs to be made a bit more complex.
You want to detect the LEDs that display information (status, bandwidth, ...) and discard the purely decorative ones.
I see three simple solutions to this:
You have prior knowledge of the positions of the LEDs. In this case, you can apply the very same method, but on a specific part of the whole image (using cv.SetImageROI).
You have prior knowledge of the colors of the LEDs (you can see in the image that there are two different colors). Then you can search the whole image and apply a color filter to narrow down the candidates.
You have no prior knowledge. In this case, things get a bit more complex. I would tend to say that the LEDs that are not useful should all have the same color, and that status LEDs usually blink. This means that by adding a learning step to the method, you might be able to work out which LEDs actually have to be selected as useful.
Hope this brings some more food for thought.