Hi, I'm using EmguCV and I enjoy programming with it.
However, I'm wondering whether there is an elegant way to add two pixels individually.
To add images you can use CvInvoke.Add(), but for an individual pixel operation you seem to have to write it in an ugly way.
Say you have p, p1 and p2 as Emgu.CV.Structure.Bgr; you have to write:
p = new Bgr(p1.Blue + p2.Blue, p1.Green + p2.Green, p1.Red + p2.Red);
I really hate this and tried to write an operator for it. But that is apparently impossible, since in C# operator overloads must be defined inside the host class.
Is there any way to do this elegantly?
================Edit================
What I want to do is calculate the sum of the pixels in an image, so the basic operation is adding pixels, i.e. Bgr values.
Let's suppose you have two images img1 and img2:
If you want to add them you can do img3 = img1 + img2;
If you simply want the summation of each color channel on a single image img1 you can do:
Bgr sums = img1.GetSum();
double TotalVal = sums.Blue + sums.Green + sums.Red;
Hope this helps,
Luca
My binary image has lots of noise (small white blobs about 3-6 pixels in area). Can the function skimage.morphology.remove_small_objects() be used to remove these small blobs?
In my experimentation, the function leaves the image unchanged. Am I using the function incorrectly or is the function not suited to what I want to achieve?
import cv2
from skimage import morphology

src = cv2.imread('plan4.png')
src = cv2.GaussianBlur(src, (3, 3), 1)
edges = get_edges(src.copy())  # my own edge-detection helper
noise_reduced = morphology.remove_small_objects(edges.copy(), 2)

cv2.imshow('src', src)
cv2.imshow('noise_reduced', noise_reduced)
cv2.imshow('edges', edges)
Below are the original image with the small white blobs (that I want to remove) and the result of remove_small_objects(); notice they are the same and no blobs were removed. Note: morphologically closing or opening the image would remove these small blobs, but it also degrades my lines too much. I would really prefer to find blobs whose area is ~6 pixels and delete those.
When you pass in an integer image, scikit-image assumes that all same-valued pixels belong to the same object, even if they are not connected. So, in your case, all the pixels are considered part of the same (big) object, and none are removed. Instead, you should use:
from skimage import morphology
from skimage.measure import label

noise_reduced = morphology.remove_small_objects(label(edges), 2)
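Alternatively, remove_small_objects also accepts a boolean mask directly; for boolean input it labels the connected components itself. A minimal sketch, assuming edges is a uint8 image whose foreground pixels are non-zero (min_size=7 is only an assumption based on the ~3-6 pixel blob areas mentioned in the question):

import numpy as np
from skimage import morphology

# Boolean foreground mask: for bool input, remove_small_objects
# labels the connected components internally before filtering by size.
mask = edges.astype(bool)
cleaned = morphology.remove_small_objects(mask, min_size=7)

# Back to uint8 so cv2.imshow can display it.
noise_reduced = cleaned.astype(np.uint8) * 255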
Hope this helps!
I am trying to read a handwritten form which has boxed input.
I have run Tesseract on the image but get strange results. In my understanding, the best thing to do is to detect the bounding box and subtract it from the image. What's the best way to detect the box (the semi-box around each character)? I tried cv2.HoughLines(), but with no result.
I am new to OpenCV. It would be really helpful if someone could help me out here.
Thanks for your idea. I just realized I can probably count the dark pixels in each vertical column and remove the columns where the count is greater than a certain threshold:
import numpy as np

def get_pixel_count_in_col(img, col):
    # Count the dark (non-white) pixels in a single column.
    count = 0
    for j in range(img.shape[0]):
        if img[j, col] < 255:
            count = count + 1
    return count

def cleanup_img(img):
    foundlines = []
    for i in range(img.shape[1]):
        if get_pixel_count_in_col(img, i) > img.shape[0] * 0.7:
            foundlines.append(i)
            # Also drop a mostly-dark neighbouring column on either side,
            # to catch the anti-aliased edges of the detected line.
            if i > 0 and get_pixel_count_in_col(img, i - 1) > img.shape[0] * 0.25:
                foundlines.append(i - 1)
            if i + 1 < img.shape[1] and get_pixel_count_in_col(img, i + 1) > img.shape[0] * 0.25:
                foundlines.append(i + 1)
    return np.delete(img, foundlines, 1)
The resulting image makes more sense. But is there any other easy way to do this?
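As an aside, the same column test can be vectorized with numpy to avoid the Python-level loops. A sketch under the same assumptions (img is a single-channel grayscale array with line pixels darker than 255; for simplicity the neighbouring columns are dropped unconditionally here rather than re-tested with the 0.25 threshold):

import numpy as np

def cleanup_img_vectorized(img):
    # Dark-pixel count for every column in one array operation.
    counts = (img < 255).sum(axis=0)
    # Columns that are mostly dark are treated as box lines.
    is_line = counts > img.shape[0] * 0.7
    # Widen each detected line by one column on both sides to catch
    # anti-aliased edges (mirrors the i-1 / i+1 checks above).
    widened = is_line.copy()
    widened[1:] |= is_line[:-1]
    widened[:-1] |= is_line[1:]
    return img[:, ~widened]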
It seems that your input format is quite clean and consistent. You can simply hard-code the width of each box in pixels and crop out the characters. If the input format is not fixed, this answer can be extended to handle that as well (it would be a bit more expensive); as a first attempt, we simply hard-code the width of the boxes in pixels.
import cv2

def get_image_chunks(img, size):
    chunks = []
    # Small padding to trim off the black box borders.
    padding = 2
    for i in range(0, img.shape[1], size):
        col_start = i + padding
        col_end = i + size - padding
        # Slice one box out of the numpy array.
        chunks.append(img[:-padding, col_start:col_end])
    return chunks

img = cv2.imread("/Users/anmoluppal/Downloads/GLUmJ.jpg", 0)
chunks = get_image_chunks(img, 42)
Outputs: (the individual cropped characters, shown as images in the original answer)
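As a possible follow-up (my own hypothetical usage, not part of the original answer), each chunk can be written out for inspection or passed on to the OCR step:

for idx, chunk in enumerate(chunks):
    # Save each cropped box as its own image file.
    cv2.imwrite("chunk_%d.png" % idx, chunk)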
1. Introduction:
So I want to develop a special filter method for UIImages. My idea is to change all of a picture's colors to black except for a certain color, which should keep its appearance.
Images are always nice, so look at this image to get what I'd like to achieve:
2. Explanation:
I'd like to apply a filter (algorithm) that is able to find specific colors in an image. The algorithm must be able to replace all colors that do not match the reference color with, e.g., black.
I've developed some simple code that can replace specific colors (color ranges with a threshold) in any image.
But honestly, this solution doesn't seem to be fast or efficient at all!
func colorFilter(image: UIImage, findcolor: String, threshold: Int) -> UIImage {
    let img: CGImage = image.cgImage!
    let context = CGContext(data: nil, width: img.width, height: img.height, bitsPerComponent: 8, bytesPerRow: 4 * img.width, space: CGColorSpaceCreateDeviceRGB(), bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
    context.draw(img, in: CGRect(x: 0, y: 0, width: img.width, height: img.height))
    let binaryData = context.data!.assumingMemoryBound(to: UInt8.self),
        referenceColor = HEXtoHSL(findcolor) // [h, s, l] integer array
    for i in 0..<img.height {
        for j in 0..<img.width {
            let pixel = 4 * (i * img.width + j)
            let pixelColor = RGBtoHSL([Int(binaryData[pixel]), Int(binaryData[pixel+1]), Int(binaryData[pixel+2])]) // [h, s, l] integer array
            let distance = calculateHSLDistance(pixelColor, referenceColor) // value between 0 and 100
            if (distance > threshold) {
                // 255 paints non-matching pixels white; use 0 here for black
                let setValue: UInt8 = 255
                binaryData[pixel] = setValue; binaryData[pixel+1] = setValue; binaryData[pixel+2] = setValue; binaryData[pixel+3] = 255
            }
        }
    }
    let outputImg = context.makeImage()!
    return UIImage(cgImage: outputImg, scale: image.scale, orientation: image.imageOrientation)
}
3. Code Information: The code above works fine, but it is absolutely inefficient. Because of all the calculation (especially the color conversions), it takes a LONG (too long) time; have a look at this screenshot:
My question: I'm pretty sure there is a WAY simpler way of filtering a specific color (with a given threshold, so that #c6456f is similar to #c6476f, etc.) than looping through EVERY single pixel and comparing its color.
So what I was thinking about was something like a filter (a CIFilter method) as an alternative to the code above.
Some Notes
Please do not post replies that suggest using the OpenCV library; I would like to develop this "algorithm" exclusively in Swift.
The image used for the timing in the screenshot has a resolution of 500 * 800 px.
That's all!
Did you really read this far? Congratulations! Any help speeding up my code would be very much appreciated. (Maybe there's a better way to get the pixel colors than looping through every pixel.) Thanks a million in advance :)
The first thing to do is profile (measure the time consumed by different parts of your function). Profiling often shows that time is spent in some unexpected place, and it always suggests where to direct your optimization effort. That doesn't mean you have to focus on the most time-consuming part, but it will show you where the time goes. Unfortunately, I'm not familiar with Swift, so I cannot recommend a specific tool.
Regarding iterating through all pixels: whether you can avoid it depends on the image structure and your assumptions about the input data. I see two cases where you can avoid a full scan:
When there is some optimized data structure built over your image (e.g. statistics about its regions). That usually makes sense when you process the same image with the same (or a similar) algorithm under different parameters; if you process every image only once, it will likely not help you.
When you know that the green pixels always occur in groups, so there cannot be an isolated single pixel. In that case you can skip one or more pixels at a time and, when you find a green pixel, analyze its neighbourhood.
I do not code on your platform but...
Well, I assume your masked areas (with the specific color) are continuous and large enough, i.e. you get groups of pixels with big enough areas (not just stuff a few pixels thick). With this assumption you can create a density map for your color. What I mean is: if the minimum detail size of your specific-color areas is 10 pixels, then you can inspect every 8th pixel along each axis, speeding up the initial scan roughly 64 times. Then you use the full scan only for the regions that contain your color. Here is what you have to do:
determine properties
You need to set the step for each axis (how many pixels you can skip without missing your colored zone). Let's call these dx, dy.
create density map
Simply create a 2D array that holds, for each region, whether its center pixel matches your specific color. So if your image has resolution xs, ys, your map will be:
int mx=xs/dx;
int my=ys/dy;
int map[mx][my],x,y,xx,yy;
// test the center pixel of each dx x dy cell against the specific color
for (yy=0,y=dy>>1;y<ys;y+=dy,yy++)
 for (xx=0,x=dx>>1;x<xs;x+=dx,xx++)
  map[xx][yy]=compare(pixel(x,y),specific_color)<threshold;
enlarge map set areas
Now you should enlarge the set areas in map[][] into the neighboring cells, because step #2 could miss the edges of your color region; see the sketch below.
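A sketch of this enlargement step (in Python for illustration, since the answer's code is pseudocode; map_cells stands for the boolean map[][] built in step #2):

import numpy as np

def enlarge(map_cells):
    # Dilate the set cells by one in every direction, so that color
    # regions whose edges fell between sample points are still covered.
    out = map_cells.copy()
    out[1:, :] |= map_cells[:-1, :]
    out[:-1, :] |= map_cells[1:, :]
    out[:, 1:] |= map_cells[:, :-1]
    out[:, :-1] |= map_cells[:, 1:]
    return out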
process all set regions
for (yy=0;yy<my;yy++)
 for (xx=0;xx<mx;xx++)
  if (map[xx][yy])
   for (y=yy*dy;y<(yy+1)*dy;y++)
    for (x=xx*dx;x<(xx+1)*dx;x++)
     if (compare(pixel(x,y),specific_color)>=threshold) pixel(x,y)=0x00000000;
If you want to speed this up even more, you need to detect which set map[][] cells are on an edge (have at least one zero neighbor). You can distinguish the cells like this:
0 - no specific color is present
1 - inside of color area
2 - edge of color area
That can be done simply in O(mx*my). After that you only need to check colors in the edge regions, so:
for (yy=0;yy<my;yy++)
 for (xx=0;xx<mx;xx++)
  if (map[xx][yy]==2)
   {
   // edge cell: test every pixel individually
   for (y=yy*dy;y<(yy+1)*dy;y++)
    for (x=xx*dx;x<(xx+1)*dx;x++)
     if (compare(pixel(x,y),specific_color)>=threshold) pixel(x,y)=0x00000000;
   }
  else if (map[xx][yy]==0)
   {
   // cell contains none of the specific color: blank it wholesale
   for (y=yy*dy;y<(yy+1)*dy;y++)
    for (x=xx*dx;x<(xx+1)*dx;x++)
     pixel(x,y)=0x00000000;
   }
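The classification pass itself is not spelled out in the answer; one possible way to compute it (a Python sketch, since the answer's code is pseudocode; map_cells is the boolean density map from step #2):

import numpy as np

def classify(map_cells):
    # 0 = no specific color, 1 = interior of a color area, 2 = edge.
    cls = np.zeros(map_cells.shape, dtype=np.uint8)
    padded = np.pad(map_cells, 1, constant_values=False)
    # A set cell is interior only if all four neighbors are also set.
    interior = (map_cells &
                padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    cls[map_cells] = 2   # set cells default to "edge"
    cls[interior] = 1    # fully surrounded cells are interior
    return cls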
This should be even faster. In case your image resolution xs,ys is not a multiple of the region size dx,dy, you should handle the outer edge of the image either by zero padding or by special loops for the missing part of the image...
BTW, how long does it take to read and set your whole image?
for (y=0;y<ys;y++)
 for (x=0;x<xs;x++)
  pixel(x,y)=pixel(x,y)^0x00FFFFFF;
If this alone is slow, it means your pixel access is too slow and you should use a different API for it. That is a very common mistake on the Windows GDI platform, where people usually use Pixels[][], which is slower than a crawling snail; there are other ways, like bit locking/blitting, ScanLine, etc., so in that case you need to look for something fast on your platform. If you cannot speed up even this, then you cannot do anything else... BTW, what hardware does this run on?
I'm attempting to make a photo effect where you subtract one or two channels from a red-green-blue channel triple. Suppose, for example, I don't want any green or red in my final image. One way to do this is to simply zero the green and red components. However, I lose the edges, shape, and shading of many objects with that approach. What I really want is more of a "grayscale with blue hints" effect (especially if that blue can represent the original blue that was in the image). What formula do I use for this?
B = R*0.299 + G*0.587 + B*0.114
R = G = 0
Blue = 0.299×Red + 0.587×Green + 0.114×Blue
This formula is quite popular, but it's incorrect and will not give you good results. For correct results you first have to convert to a linear colorspace and then use different (Rec. 709) weights:
Blue = 0.2126×Red + 0.7152×Green + 0.0722×Blue
A correct approximation, applying the ~2.2 gamma inline, is:
Blue = (0.2126×Red^2.2 + 0.7152×Green^2.2 + 0.0722×Blue^2.2)^(1/2.2)
Green = Red = 0
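For concreteness, a minimal numpy sketch of this "grayscale with blue hints" effect using the gamma-corrected formula above (the file names are placeholders, and an 8-bit RGB input is assumed):

import numpy as np
from PIL import Image

img = np.asarray(Image.open('input.png').convert('RGB'), dtype=np.float64) / 255.0

# Gamma-correct luminance: linearize (^2.2), apply the Rec. 709
# weights, then de-linearize (^(1/2.2)).
lum = (0.2126 * img[..., 0] ** 2.2 +
       0.7152 * img[..., 1] ** 2.2 +
       0.0722 * img[..., 2] ** 2.2) ** (1 / 2.2)

out = np.zeros_like(img)
out[..., 2] = lum  # luminance goes into blue; red and green stay 0

Image.fromarray((out * 255).astype(np.uint8)).save('blue_hints.png')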
I have a 128×128 array of elevation data (elevations from -400 m to 8000 m are displayed using 9 colors), and I need to resize it to 512×512. I did it with bicubic interpolation, but the result looks weird. In the picture you can see the original, nearest neighbor, and bicubic. Note: only the elevation data are interpolated, not the colors themselves (the gamut is preserved). Are the artifacts seen in the bicubic image the result of my bad interpolation code, or are they caused by interpolating discrete (9-step) data?
http://i.stack.imgur.com/Qx2cl.png
There must be something wrong with the bicubic code you're using. Here's my result with Python:
The black border around the outside is where the result was outside of the palette due to ringing.
Here's the program that produced the above:
from PIL import Image

im = Image.open(r'c:\temp\temp.png')

# convert the image to a grayscale with 8 values from 10 to 17
levels = ((0,0,255),(1,255,0),(255,255,0),(255,0,0),(255,175,175),(255,0,255),(1,255,255),(255,255,255))
img = Image.new('L', im.size)
iml = im.load()
imgl = img.load()
colormap = {}
for i, color in enumerate(levels):
    colormap[color] = 10 + i
width, height = im.size
for y in range(height):
    for x in range(width):
        imgl[x,y] = colormap[iml[x,y]]

# resize using bicubic and restore the original palette
im4x = img.resize((4*width, 4*height), Image.BICUBIC)
palette = []
for i in range(256):
    if 10 <= i < 10+len(levels):
        palette.extend(levels[i-10])
    else:
        palette.extend((i, i, i))
im4x.putpalette(palette)
im4x.save(r'c:\temp\temp3.png')
Edit: Evidently Python's Bicubic isn't the best either. Here's what I was able to do by hand in Paint Shop Pro, using roughly the same procedure as above.
While bicubic interpolation can sometimes generate interpolated values outside the original range (can you verify whether this is happening to you?), it really seems like you may have a bug; it is hard to say without looking at the code. As a general rule, the bicubic result should be smoother than the nearest-neighbor result.
Edit: I take that back; I see no interpolated values outside the original range in your images. Still, the strange part is the "jaggedness" you get with bicubic; you may want to double-check that.
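For reference, a small self-contained demonstration (my own illustration, not taken from either answer) of why bicubic can produce values outside the original range: a Catmull-Rom cubic, one common bicubic kernel, overshoots on both sides of a step edge.

def catmull_rom(p0, p1, p2, p3, t):
    # Catmull-Rom cubic interpolating between p1 and p2, t in [0, 1].
    return 0.5 * ((2 * p1) +
                  (-p0 + p2) * t +
                  (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2 +
                  (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

samples = [0, 0, 0, 1, 1, 1]  # a step edge; all data within [0, 1]

# Just before the edge the curve dips below 0; just after, it rises above 1:
print(catmull_rom(samples[0], samples[1], samples[2], samples[3], 0.5))  # -0.0625
print(catmull_rom(samples[2], samples[3], samples[4], samples[5], 0.5))  #  1.0625

Values outside the data range like these are exactly what produced the black border (ringing) noted in the first answer.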