I've stumbled upon a somewhat easy question, yet it differs from regular histogram drawing:
"Sketch a histogram of the 4-bit image shown below:"
I know that a histogram is drawn by collecting some data and its frequencies, and then drawing taller bars where the frequency is higher.
I'm guessing this table is supposed to represent an image, and the numbers are probably the intensity of some color or grey level... I don't really know how to collect the data and frequencies from it; do I just take each number and count how many times it appears?
I know the answer should be simple ^^
Thank you
I've drawn this into an Excel table; is this correct, or should it be done in a different way? The whole "4-bit image" thing confuses me...
Thank you

In a 4-bit image, gray-scale values range from 0 to 15, so the histogram has 16 bins. Apart from the 18th (and later) row(s) of your Excel sheet, all is good.
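To make the counting concrete, here is a minimal C++ sketch with a made-up 4x4 image (placeholder values, since the original table isn't reproduced here). A 4-bit image needs exactly 16 bins, one per gray level:

```cpp
#include <array>
#include <cstdio>

int main() {
    // Hypothetical 4x4 "4-bit image": each value is a gray level in [0, 15].
    int image[4][4] = {
        { 0,  1,  1,  2},
        { 3,  3,  3, 15},
        { 0,  1,  2,  3},
        {15, 15,  7,  7},
    };

    // One bin per possible gray level: 2^4 = 16 bins, even for levels
    // that never occur (those bins simply stay at zero).
    std::array<int, 16> histogram{};
    for (auto& row : image)
        for (int value : row)
            ++histogram[value];

    // The sketch of the histogram is just these counts as bar heights.
    for (int level = 0; level < 16; ++level)
        std::printf("level %2d: %d\n", level, histogram[level]);
}
```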
I'm having a hard time understanding how to identify the labels in the ADE20K dataset.
I was looking at the CSV
https://github.com/CSAILVision/sceneparsing/blob/master/objectInfo150.csv
and grabbed one example: floor, index 4.
I then looked at one sample image, ADE_train_00000001.png, which in Photoshop looks like the following when selecting a pixel on the floor.
In that screenshot, the floor has an RGB value of 3 for all channels. Assuming the index is used as the RGB value, shouldn't it be rgb(4, 4, 4) in Photoshop?
I must be misunderstanding something; can you help explain?
It turns out Photoshop was giving different values compared to Digital Color Meter (a macOS app).
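If you want to bypass color-managed viewers entirely, one option is to read the raw pixel value yourself. A sketch using OpenCV (the coordinates are made up, and I'm assuming the annotation PNG stores the label index directly in the pixel value):

```cpp
#include <iostream>
#include <opencv2/opencv.hpp>

int main() {
    // IMREAD_UNCHANGED avoids any conversion, so the raw label
    // indices are preserved (a color-managed viewer may remap them).
    cv::Mat labels = cv::imread("ADE_train_00000001.png", cv::IMREAD_UNCHANGED);
    if (labels.empty()) return 1;

    int x = 100, y = 400;  // hypothetical coordinates of a pixel on the floor
    if (labels.channels() == 1) {
        std::cout << "label index: " << int(labels.at<uchar>(y, x)) << "\n";
    } else {
        cv::Vec3b px = labels.at<cv::Vec3b>(y, x);  // OpenCV stores BGR
        std::cout << "label index: " << int(px[0]) << "\n";
    }
}
```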
I want to match one captured frame against all the frames stored in a database. The frame is captured by a mobile phone, while the database holds the originals. I have been searching for days to find a good method to compare them, taking into account that they do not have the same resolution, colors, luminance, etc. Does anyone have an idea?
I have already done a preprocessing step on the captured frame, with C++ and the OpenCV library, to make it as faithful as possible to the original. But I do not know what would be a good feature for deciding whether two frames match.
Any comment will be very helpful, thank you!
EDIT: I implemented an algorithm which compares the difference between the two images after resizing them to 160x90, converting to grayscale, and quantizing. The results are the following:
The mean value of the image difference is 13. However, if I use two completely different images, the mean value of the image difference is 20. So I do not know if this measure can be improved in some manner to give a better margin for matching.
Thanks for the help in advance.
Cut the color depth from 24 bits per pixel (or whatever it is) to 8 or 16 bits per pixel; you may be able to use a posterize function for this. Then resize both images to a small size (maybe 16x16 or 100x100, depending on your images), and compare. This should match similar images fairly closely, but it will not account for different rotations or locations of objects in the image.
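A sketch of that pipeline with OpenCV (this mirrors the grayscale/resize/quantize steps from the EDIT above; the 16x16 size and the 4-bit posterize are arbitrary starting points):

```cpp
#include <opencv2/opencv.hpp>

// Mean absolute difference between two images after shrinking them to a
// tiny size and crudely posterizing; lower scores mean more similar.
double imageDistance(const cv::Mat& a, const cv::Mat& b) {
    cv::Mat ga, gb;
    cv::cvtColor(a, ga, cv::COLOR_BGR2GRAY);  // assuming BGR color input
    cv::cvtColor(b, gb, cv::COLOR_BGR2GRAY);

    cv::resize(ga, ga, cv::Size(16, 16));
    cv::resize(gb, gb, cv::Size(16, 16));

    // Crude posterize: keep only the top 4 bits of each pixel.
    ga &= 0xF0;
    gb &= 0xF0;

    cv::Mat diff;
    cv::absdiff(ga, gb, diff);
    return cv::mean(diff)[0];
}
```

You would then threshold this score somewhere between your observed values for matching and non-matching pairs, ideally after measuring the score distribution over many pairs from your database.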
I have a sheet of paper on another surface. I want to figure out when the paper ends and the surface begins.
I'm using this approach:
1. Convert the image to a pixel array
2. Pick 3 random 20x20 squares and count the frequency of each color
3. Take the color with the highest frequency as the background
However, the problem is that I get over 100 distinct colors every time I do this on an actual image taken by the camera.
I think I can fix it by reducing the image to a 16-color palette. Is there a way to do this on a UIImage or CGImage?
Thanks
Your colours are probably very close together. How about calculating the distance (the cumulative absolute difference between red, green and blue values) from each sampled colour to a reference colour - just use the first one you sample as reference. If the distance is large, you have a different colour. If the distance is small, you have the same colour with minor variations in lighting or other camera artefacts.
Basically this is applying a filter in a very simple manner. It is up to you to decide how big the difference has to be for the colours to be considered different, but you could decide that by looking at the median difference of all the colours and grouping them into over/under samples.
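A minimal sketch of that distance test (the threshold of 30 is an arbitrary placeholder; derive the real one from the median difference as described above):

```cpp
#include <cstdint>
#include <cstdlib>

struct RGB { uint8_t r, g, b; };

// Cumulative absolute difference between the red, green and blue values.
// Small distances = same colour with lighting noise; large = different colour.
int colorDistance(RGB a, RGB b) {
    return std::abs(a.r - b.r) + std::abs(a.g - b.g) + std::abs(a.b - b.b);
}

// Group a sample against the reference colour (the first sample taken).
bool sameColor(RGB sample, RGB reference, int threshold = 30) {
    return colorDistance(sample, reference) < threshold;
}
```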
You might also get good results from applying a Core Image filter to the sample images, such as CIColorClamp (CISpotColor looks better but is OS X only). If you can find a suitable filter, there is a good chance it will be simpler and faster than doing it yourself.
SHORT: Is there a function in OpenCV, or a general algorithm, which could return an index of image homogeneity?
LONG VERSION :-) I am implementing auto-focus based on image-data evaluation. My images are biological cells, which are spread at a fairly similar density across the image area. Unfortunately, my algorithm is sometimes disturbed by dirt on the cover glass, which mostly appears as a few bright spots. So my idea is to discard focus-function peaks caused by inhomogeneous images.
Thank you for any suggestions!
Example images as requested (not the best ones, but they should show the problem fairly well):
The left image was captured at the wrong Z-position because of dirt; the right one is OK.
Looking at the images, you could split each one up into parts (say 4x4 subimages), compute the variance in each subimage, and check whether the difference between the lowest and highest variance is large.
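A sketch of that check with OpenCV, assuming a grayscale input (the 4x4 grid and the ratio-based index are just one way to express it):

```cpp
#include <algorithm>
#include <opencv2/opencv.hpp>

// Split the image into a grid, compute the variance of each tile, and
// return the max/min variance ratio: near 1 means homogeneous, large
// values suggest isolated bright spots such as dirt.
double homogeneityIndex(const cv::Mat& gray, int grid = 4) {
    double lo = 1e18, hi = 0.0;
    int th = gray.rows / grid, tw = gray.cols / grid;
    for (int i = 0; i < grid; ++i) {
        for (int j = 0; j < grid; ++j) {
            cv::Mat tile = gray(cv::Rect(j * tw, i * th, tw, th));
            cv::Scalar mean, stddev;
            cv::meanStdDev(tile, mean, stddev);
            double var = stddev[0] * stddev[0];
            lo = std::min(lo, var);
            hi = std::max(hi, var);
        }
    }
    return hi / std::max(lo, 1e-9);  // guard against division by zero
}
```

You could then discard focus measurements from frames whose index exceeds a threshold tuned on your own images.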
I want to convert a 24-bit RGB image (8 bits per channel) to 8-bit, using an indexed color palette.
My initial idea was to create an array and simply count the number of times each color is represented in the image, but I figured it would be wasteful if large areas with slight changes in color used up all of the palette space at the expense of smaller, but maybe more significant, color groups.
Once I finish building the palette, my idea was to treat each RGB color as a 3-dimensional vector and compare its dot product with each entry in the palette.
...
As you might see, I'm not completely up on the terminology, but I hope you get what I mean :)
My question is: is anyone able to share insights on how to approach this, or perhaps point me toward any reading material online?
Thanks!
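One note on the matching step: the usual metric for "closest palette entry" is (squared) Euclidean distance in RGB space; a plain dot product grows with brightness rather than with closeness. A small sketch:

```cpp
#include <cstdint>
#include <vector>

struct RGB { uint8_t r, g, b; };

// Squared Euclidean distance between two colors; no need for the square
// root when we only compare distances against each other.
static int dist2(RGB a, RGB b) {
    int dr = a.r - b.r, dg = a.g - b.g, db = a.b - b.b;
    return dr * dr + dg * dg + db * db;
}

// Index of the palette entry nearest to the given color.
size_t nearestEntry(const std::vector<RGB>& palette, RGB c) {
    size_t best = 0;
    for (size_t i = 1; i < palette.size(); ++i)
        if (dist2(palette[i], c) < dist2(palette[best], c))
            best = i;
    return best;
}
```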
According to Paul Heckbert's 1982 paper, the popularity algorithm is inferior to Median Cut.
There's a family of Median-Cut-like (space-subdivision) algorithms that optimize different criteria, e.g. minimizing the variance of colors in each partition.
There's a fast, but ugly, subdivision using an octree.
There are clustering algorithms such as K-Means and Linde-Buzo-Gray.
An interesting odd one is the NeuQuant neural network.
I'm still trying to figure out the best one for pngquant.
You're looking for color quantization.
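To make Median Cut concrete, here is a compact sketch of one common variant (repeatedly split the most populated box along its widest channel at the median, then average each box); real implementations such as pngquant's are considerably more refined:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct RGB { uint8_t r, g, b; };

// A box is a contiguous range of pixels in a shared, reordered buffer.
struct Box { size_t begin, end; };

// Channel with the largest value range inside the box (0=r, 1=g, 2=b).
static int widestChannel(const std::vector<RGB>& px, Box b) {
    uint8_t lo[3] = {255, 255, 255}, hi[3] = {0, 0, 0};
    for (size_t i = b.begin; i < b.end; ++i) {
        uint8_t c[3] = {px[i].r, px[i].g, px[i].b};
        for (int k = 0; k < 3; ++k) {
            lo[k] = std::min(lo[k], c[k]);
            hi[k] = std::max(hi[k], c[k]);
        }
    }
    int range[3] = {hi[0] - lo[0], hi[1] - lo[1], hi[2] - lo[2]};
    return int(std::max_element(range, range + 3) - range);
}

std::vector<RGB> medianCut(std::vector<RGB> px, size_t paletteSize) {
    std::vector<Box> boxes{{0, px.size()}};
    while (boxes.size() < paletteSize) {
        // Split the box holding the most pixels.
        auto it = std::max_element(boxes.begin(), boxes.end(),
            [](Box a, Box b) { return a.end - a.begin < b.end - b.begin; });
        Box box = *it;
        if (box.end - box.begin < 2) break;  // nothing left to split
        int ch = widestChannel(px, box);
        size_t mid = (box.begin + box.end) / 2;
        // Partition the box's pixels around the median of the widest channel.
        std::nth_element(px.begin() + box.begin, px.begin() + mid,
                         px.begin() + box.end, [ch](RGB a, RGB b) {
                             uint8_t ca[3] = {a.r, a.g, a.b};
                             uint8_t cb[3] = {b.r, b.g, b.b};
                             return ca[ch] < cb[ch];
                         });
        *it = {box.begin, mid};
        boxes.push_back({mid, box.end});
    }
    std::vector<RGB> palette;
    for (Box b : boxes) {  // each palette entry is the mean color of a box
        size_t n = b.end - b.begin;
        if (n == 0) continue;
        unsigned long r = 0, g = 0, bl = 0;
        for (size_t i = b.begin; i < b.end; ++i) {
            r += px[i].r; g += px[i].g; bl += px[i].b;
        }
        palette.push_back({uint8_t(r / n), uint8_t(g / n), uint8_t(bl / n)});
    }
    return palette;
}
```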