In computer vision, I would like to ask if there is any method that can match the same image at different magnifications?
For example, say there is an image at 200x magnification and another at 900x magnification (both images show the same thing, only at different magnifications).
Is there any method by which this can be done?
For example, by using a deep learning algorithm?
I am working on a project where my task is to identify a machine part by its part number, which is written on a label attached to the part or engraved on its surface. One example each of a label and an engraved part is shown in the figures below.
My task is to recognise a 9- or 10-character alphanumeric code (03C 997 032 D in the 1st image and 357 955 531 in the 2nd image). This seems like an easy task; however, I am having trouble distinguishing the useful information in the image from the rest of the part, i.e. there are many other numbers and characters in both images and I want to focus only on the codes mentioned. I have tried many things but with no success so far. Does anyone know of image pre-processing methods or an ML/DL model that I should apply to get the desired result?
Thanks in advance!
JD
You can use OCR to get all the characters from the image and then use regular expressions to extract the desired patterns.
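For example, a minimal sketch using pytesseract (assuming the Tesseract binary is installed; the file name and the regular expression are illustrative and would need adjusting to your real part-number format):

import re
import pytesseract  # Python wrapper; requires the Tesseract binary
from PIL import Image

# Run OCR over the whole image to get the raw text.
text = pytesseract.image_to_string(Image.open("part_label.png"))

# Illustrative pattern for codes like "03C 997 032 D" or "357 955 531":
# three groups of 3 alphanumerics, optionally followed by a single letter.
pattern = re.compile(r"\b[0-9A-Z]{3}(?: [0-9A-Z]{3}){2}(?: [A-Z])?\b")
print(pattern.findall(text))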
You can use an OCR engine, like Tesseract.
You may want to clean the images before running the text-recognition system, by performing some filtering to remove noise and extra information (a small preprocessing sketch follows the tutorial links below), such as:
Convert to grayscale (colors are not relevant, are they?)
Crop to the region of interest
Canny edge filter
A good starting point is one of these tutorials:
OpenCV OCR with Tesseract (Python API)
Recognizing text/number with OpenCV (C++ API)
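Here is a minimal sketch of that preprocessing pipeline using OpenCV's Python API; the file name and crop coordinates are placeholders you would tune for your own images:

import cv2

# Load and convert to grayscale (color carries little information here).
img = cv2.imread("part_label.png")  # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Crop to the region of interest; coordinates are placeholders.
roi = gray[100:300, 50:400]

# Optional: Canny edge detection to emphasize engraved characters.
edges = cv2.Canny(roi, 50, 150)

cv2.imwrite("preprocessed.png", edges)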
This is a bit of an abstract question.
I have a group of 28x28 px images from certain people, and I would like to label the data with the person who wrote each one. How would I go about labeling it for training and testing? This is my first neural network, and I'm having difficulty finding any tutorials that suit my particular need. It feels like most datasets, like MNIST/EMNIST, are already labeled.
Some more info is that I'm using Python 3, and Keras with Tensorflow backend.
I am assuming that you know who wrote each image. Then this is a matter of associating that information (the class label) with each image. There are several ways of doing this. Two common approaches are:
Folder structure
Make a folder for each class (person), and put the images inside.
Folder contents:
john/01.png
john/02.png
jane/03.png
susan/...
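Keras can consume this layout directly. A minimal sketch using ImageDataGenerator, assuming the person folders sit under a hypothetical data/ root:

from keras.preprocessing.image import ImageDataGenerator

# Each subfolder of data/ (john/, jane/, ...) becomes one class.
datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)

train = datagen.flow_from_directory(
    "data/",                # placeholder root folder
    target_size=(28, 28),
    color_mode="grayscale",
    class_mode="categorical",
    subset="training",
)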
CSV file
In this case the images can all be in one folder, and a dedicated Comma-Separated Values (CSV) file is used to map each filename to its label.
Folder contents:
dataset.csv
images/01.png
images/02.png
images/03.png
images/....
dataset.csv contents:
filename,person
images/01.png,john
images/02.png,john
images/03.png,jane
...
The CSV approach is nice if you have additional data about each file that you want to store. For instance, metadata that could be relevant, such as who recorded the file, when it was recorded, with what kind of equipment, at what location, etc.
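A minimal sketch of turning such a CSV into training arrays, assuming pandas and Pillow are available and using the dataset.csv format above:

import numpy as np
import pandas as pd
from PIL import Image

# Read the label file and load each image as a 28x28 array.
df = pd.read_csv("dataset.csv")
X = np.stack([np.array(Image.open(f)) for f in df["filename"]])

# Map person names to integer class indices for training.
classes = sorted(df["person"].unique())
y = df["person"].map({name: i for i, name in enumerate(classes)}).to_numpy()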
Combinations of the two are also possible, of course.
I want to use NiftyNet to implement deep learning for medical image processing. However, there is one thing I haven't figured out regarding the data input: how does it join the multi-modality images? I saw the demo of BRATS2017; they seem to use 4 different modalities, and in the configuration file they just included the directory of the images and claim it will "concatenate" the images. But I want to know more: as those images are 3D, how are they concatenated? [slice1-30]:[slice1-30].. or [slice1, slice1, slice1 ...]:[slice2, slice2, slice2...]?
And can we control the data organization part? If so, which file should I modify?
Any suggestion would be greatly appreciated!
In this case, the 3D images are concatenated in an additional dimension. You control the order they're concatenated in by specifying the order of files to load in the *.ini files.
However, as long as you're consistent, it shouldn't matter what order the modalities go in.
The images are concatenated in the channel dimension. For 2D images, the dimensions are NSSC: batch size, 2 spatial dimensions, then channel. For 3D images, the dimensions are NSSSC: batch size, 3 spatial dimensions, then channel.
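As an illustration of that layout (not NiftyNet's actual internals), here is how four hypothetical 3D modalities would be concatenated along the channel axis with NumPy:

import numpy as np

# Four placeholder 3D modalities (e.g. T1, T1c, T2, FLAIR), each a
# volume of shape (depth, height, width).
modalities = [np.random.rand(155, 240, 240) for _ in range(4)]

# Stack along a new trailing channel axis: shape (155, 240, 240, 4),
# i.e. SSSC without the batch dimension.
volume = np.stack(modalities, axis=-1)

# Adding a batch dimension gives the NSSSC layout described above.
batch = volume[np.newaxis, ...]
print(batch.shape)  # (1, 155, 240, 240, 4)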
I'm trying to implement content-based image retrieval in my application. I found the LIRE library, which looks pretty good.
I need to analyze my image collection for similar (from a human point of view) images. My catalog contains a large number of completely different, uncategorized/unstructured images.
To analyze images, LIRE offers the following list of algorithms:
CEDD,
AutoColorCorrelogram,
BinaryPatternsPyramid,
ColorLayout,
EdgeHistogram,
FCTH,
FuzzyColorHistogram,
Gabor,
JCD,
JointHistogram,
JpegCoefficientHistogram,
LocalBinaryPatterns,
LuminanceLayout,
OpponentHistogram,
PHOG,
RankAndOpponent,
RotationInvariantLocalBinaryPatterns,
ScalableColor,
SimpleCentrist,
SimpleColorHistogram,
SPACC,
SpatialPyramidCentrist,
SPCEDD,
SPFCTH,
SPJCD,
SPLBP,
Tamura
Based on your experience, could you please recommend the one that is most suitable (from a human point of view) for this kind of image collection (a mix of uncategorized images) in order to find similar images?
I think JCD is the best one because it combines two approaches at the same time, and each approach combines two features (color & texture).
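LIRE itself is Java, but to illustrate the general idea of a joint color-and-texture descriptor, here is a rough Python/OpenCV sketch; the histogram sizes and the Sobel-based texture part are simplifications of my own, not JCD's actual definition:

import cv2
import numpy as np

def describe(path):
    # Crude joint color + texture descriptor (illustrative only).
    img = cv2.imread(path)
    # Color part: 2D hue/saturation histogram.
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    color = cv2.calcHist([hsv], [0, 1], None, [8, 8],
                         [0, 180, 0, 256]).flatten()
    # Texture part: histogram of gradient directions from Sobel filters.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    texture, _ = np.histogram(np.arctan2(gy, gx), bins=16,
                              range=(-np.pi, np.pi))
    desc = np.concatenate([color, texture.astype(np.float32)])
    return desc / (np.linalg.norm(desc) + 1e-8)

# Similar images should yield a small descriptor distance.
d = np.linalg.norm(describe("a.jpg") - describe("b.jpg"))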
I am trying to determine whether a piece of food packaging has an error or not. For example,
whether the "McDonald's" logo has misprints, such as a wrong label or wrong colors (I cannot post a picture).
What should I do? Please help me!
It's not a trivial task by any stretch of the imagination. Two images of the same identical object will always differ according to lighting conditions, perspective, shooting angle, etc.
Basically you need to:
1. Process the 2 images into "digested" data: dominant color, shapes, etc.
2. Design and run your own similarity algorithm between the 2 objects.
You may want to look at feature detectors in OpenCV: SURF, SIFT, etc.
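For instance, a minimal SIFT matching sketch (requires a recent OpenCV; the file names are placeholders). Because SIFT keypoints are scale-invariant, the same approach also tolerates images taken at different magnifications:

import cv2

# Load the reference logo and the production image in grayscale.
ref = cv2.imread("reference_logo.png", cv2.IMREAD_GRAYSCALE)
test = cv2.imread("production_photo.png", cv2.IMREAD_GRAYSCALE)

# Detect scale-invariant keypoints and compute descriptors.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(ref, None)
kp2, des2 = sift.detectAndCompute(test, None)

# Match descriptors and keep only the clearly better matches
# (Lowe's ratio test) as a crude similarity score.
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} good matches out of {len(matches)}")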
I just found your question among search results, so I may be too late.
If not, I think your problem can easily be resolved: there is a tool that has existed for years called Sikuli.
Although it is intended for testing purposes, I have been using it in exactly the way you need: comparing a reference image and a production image. It is based on OpenCV and does this very well.