Best way to scale images for finding similarity? [closed] - image-processing

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
I need to calculate the color histogram of images in order to get a feature for finding similarity between images.
(description: https://stackoverflow.com/a/844113/5142270 and https://en.wikipedia.org/wiki/Color_histogram).
The only problem I am facing is deciding how to scale the images so that they all have the same number of pixels. Is there a standard image size (in pixels) that researchers use for this purpose when there are thousands of images of arbitrary dimensions? I have searched a lot for how to scale the images, but could not find out what is supposed to be done.
Thanks in advance.
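(As an aside, one common size-independent alternative, sketched here with made-up bin counts and a NumPy-only implementation, is to normalise each histogram by its total count, so images of any dimension produce comparable features:)

```python
import numpy as np

def colour_histogram(img, bins=8):
    """Per-channel histogram, normalised to sum to 1 regardless of image size."""
    hist = np.concatenate([
        np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ])
    return hist / hist.sum()

# Two all-black images of very different sizes:
small = np.zeros((10, 10, 3), dtype=np.uint8)
large = np.zeros((100, 200, 3), dtype=np.uint8)
# Their normalised histograms are identical despite different pixel counts.
```

With this normalisation the pixel count cancels out, so no common image size is needed before comparing histograms.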

You can try using image pyramids.
There is no single 'golden' number of pixels. Instead, you run your feature extraction on the image at 1/2 its size, then 1/4, 1/8 and so on, so the feature detection is not size-dependent.
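The pyramid idea can be sketched without OpenCV; the 2x2 average pooling below is a simplified stand-in for `cv2.pyrDown` (which additionally applies Gaussian smoothing before downsampling):

```python
import numpy as np

def half_size(img):
    """One pyramid level: 2x2 average pooling (a toy stand-in for cv2.pyrDown)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2  # trim odd edges
    img = img[:h, :w].astype(float)
    return (img[0::2, 0::2] + img[0::2, 1::2]
            + img[1::2, 0::2] + img[1::2, 1::2]) / 4

def pyramid(img, levels=3):
    """Return the image at full, 1/2, 1/4, ... resolution."""
    out = [img]
    for _ in range(levels - 1):
        out.append(half_size(out[-1]))
    return out

levels = pyramid(np.ones((64, 48)), levels=3)
# shapes: (64, 48), (32, 24), (16, 12)
```

Features (e.g. histograms) are then computed at every level, so a match can be found whichever scale the object appears at.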

Related

Mapping YOLO results onto 2D plan [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 1 year ago.
I'm using YOLO to detect people in a video stream from a camera and would like to map the detected bounding boxes onto a 2D plan of the room.
Could you please give me a hint which algorithms might be used for this?
The idea is shown in the picture from the GitHub repository below, but I don't need to measure distances; I need to "project" an object's position onto a 2D map of the room:
https://github.com/sassoftware/iot-tracking-social-distancing-computer-vision
Using 3D cameras, or just two regular ones, might help a lot as well.
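One standard approach for this (not taken from the linked repository) is a planar homography: estimate a 3x3 matrix H from at least four image/floor-plan point correspondences (e.g. with `cv2.findHomography`), then project the bottom-centre of each bounding box, treated as the person's foot point on the floor. The matrix and box below are made-up numbers for illustration:

```python
import numpy as np

# Hypothetical homography mapping image floor points to room-plan coordinates.
# In practice you estimate it once from >= 4 known point pairs.
H = np.array([[0.9,    0.05,   3.0],
              [0.02,   1.1,   -2.0],
              [0.0005, 0.0002, 1.0]])

def to_plan(H, x, y):
    """Project an image point onto the room plan (homogeneous coordinates)."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Use the bottom-centre of a YOLO box as the foot point on the floor:
x1, y1, x2, y2 = 100, 50, 140, 200   # hypothetical bounding box
foot = ((x1 + x2) / 2, y2)
plan_xy = to_plan(H, *foot)
```

This only works for points that actually lie on the floor plane, which is why the bottom edge of the box is used rather than its centre.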

Is it possible to gather the image matrix of a pygame gameboard without rendering the image to the screen? [closed]

Closed 3 years ago.
I wrote a 2D simulation (very similar to the Atari/OpenAI games) in pygame, which I need for a reinforcement learning project. I'd like to train a neural network mainly on image data, i.e. screenshots of the pygame gameboard.
I am able to make those screenshots, but:
- Is it possible to gather this image data - or, more precisely, the corresponding RGB image matrix - without rendering the whole playing ground to the screen?
As far as I can tell, something like this is possible in pyglet, but I would like to avoid rewriting the whole simulation.
Basically, yes. You don't have to actually draw anything to the screen surface.
Once you have a Surface, you can use methods like get_at, the PixelArray class, or the surfarray module to access the RGB(A) values of each pixel.
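A minimal sketch of that idea, assuming a current pygame with NumPy installed (no window is ever created):

```python
import pygame

# An off-screen surface: never passed to pygame.display, never shown.
board = pygame.Surface((4, 4))
board.fill((255, 0, 0))          # draw something: here, solid red

# Read one pixel back without any window existing.
r, g, b, a = board.get_at((0, 0))

# Or grab the whole RGB matrix at once (needs NumPy), shape (width, height, 3).
arr = pygame.surfarray.array3d(board)
```

All your usual drawing calls (blit, draw.rect, etc.) work on this surface exactly as they would on the display surface, so the simulation code need not change.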

Hair Counting Algorithm [closed]

Closed 7 years ago.
I'm trying to implement an algorithm detecting and counting number of hairs. The main idea is described as below:
Enhance Image by applying Contrast Stretching.
Segment image.
Thin the segmented image.
Detect lines with the Hough line transform and relaxation.
The implementation is based on OpenCV/C++. However, since the thinning algorithm doesn't perform accurately, it leads to wrong results when I apply the Hough transform, especially where hairs overlap or touch. Moreover, the Hough transform is sensitive to its parameters. If you have other ideas, please help me. Thank you very much.
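To see why the Hough step is so parameter-sensitive, here is a minimal NumPy-only accumulator (an illustrative toy, not OpenCV's implementation); the rho/theta quantisation and the vote threshold directly decide which hairs survive as detected lines:

```python
import numpy as np

def hough_accumulator(edge_img, n_theta=180):
    """Vote every edge pixel into a (rho, theta) accumulator array."""
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))          # max possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))      # 1-degree theta bins
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(edge_img)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, diag

# A perfect vertical line at x == 5 in a 20x20 binary edge image:
edges = np.zeros((20, 20), dtype=bool)
edges[:, 5] = True
acc, diag = hough_accumulator(edges)
# All 20 edge pixels vote into the bin (rho = 5, theta = 0).
```

With coarser bins, nearby hairs merge into one peak; with finer bins, votes for a slightly curved hair spread over many bins and fall below the threshold, which matches the sensitivity described above.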

Swift: How can i get height of an object with the camera? [closed]

Closed 7 years ago.
Hi, I would like to measure an object captured with the camera in cm (or similar units).
Any idea?
Thanks!!
You can't get a precise measurement.
You would need to input roughly how far away the object is from the camera in inches.
You need to measure how many pixels tall the item is that you want to measure.
Using the measured pixel height, combined with the camera's resolution (DPI), the distance from the camera to the object, and some estimated angles, you can work out an approximate height of the object in inches using trigonometry.
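With a calibrated focal length, the trigonometry reduces to similar triangles in the pinhole camera model; all numbers below are made up for illustration:

```python
# Pinhole-camera approximation (all values are hypothetical):
focal_length_px = 1000.0   # focal length expressed in pixels (from calibration)
object_height_px = 250     # measured height of the object in the image
distance_cm = 200.0        # user-supplied distance from camera to object

# Similar triangles: real_height / distance = pixel_height / focal_length
height_cm = distance_cm * object_height_px / focal_length_px
```

The result is only as good as the distance estimate, which is why a single camera cannot give a precise measurement without extra information.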

The drawbacks of the LBP image feature extraction method [closed]

Closed 7 years ago.
The LBP feature has the drawback that it is not too robust on flat image areas.
Questions:
What is a flat image?
What do we mean by "not being robust on flat image areas"?
An image region is said to be "flat" if it has a nearly uniform intensity. In other words, the variance of the intensity values within the region is very low.
The LBP feature is not robust on "flat" image areas since it is based on intensity differences. Within flat image regions, the intensity differences are of small magnitude and highly affected by image noise. Moreover, they are ignorant of the actual intensity level at the location they are computed on.
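Both points can be demonstrated with a toy 3x3 LBP in NumPy (a sketch of the basic 8-neighbour operator): one grey level of noise flips the code on a flat patch, while a uniformly brighter patch yields the exact same code:

```python
import numpy as np

def lbp_code(patch3x3):
    """LBP of the centre pixel: threshold the 8 neighbours at the centre value."""
    c = patch3x3[1, 1]
    # 8 neighbours, clockwise from the top-left corner:
    neighbours = patch3x3.flatten()[[0, 1, 2, 5, 8, 7, 6, 3]]
    bits = (neighbours >= c).astype(int)
    return int((bits * 2 ** np.arange(8)).sum())

flat = np.full((3, 3), 100)
noisy_flat = flat.copy()
noisy_flat[0, 0] = 99              # a single grey level of noise

# Noise on a flat patch changes the code,
# while doubling the brightness of the flat patch does not.
```

Here `lbp_code(flat)` differs from `lbp_code(noisy_flat)` even though the patches are visually indistinguishable, while a patch of constant value 200 maps to the same code as one of constant value 100: the descriptor reacts to noise but ignores the absolute intensity level.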
