I am trying to replicate this paper: https://www.nature.com/articles/s41598-021-89779-z, where the authors use satellite data to predict crop yield with machine learning. One of their steps is shown in the image below:
The authors turn an image collection into a frequency histogram by collecting all of the pixel values in a selected region. How would you code this using Google Earth Engine in Google Colab?
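A sketch of that step, with two caveats: the collection ID, region, and value range below are placeholders, not taken from the paper, and the `ee` calls only run after authenticating in Colab. The same binning step is also shown offline with NumPy so the logic is clear:

```python
import numpy as np

def gee_region_histogram():
    """Sketch of the Earth Engine call (needs `earthengine-api` + auth in Colab).

    The collection ID, region, and histogram range are placeholders.
    """
    import ee
    ee.Authenticate()   # one-time browser-based auth in Colab
    ee.Initialize()
    region = ee.Geometry.Rectangle([-94.0, 41.0, -93.0, 42.0])   # placeholder AOI
    image = (ee.ImageCollection('MODIS/061/MOD09A1')             # placeholder collection
             .filterDate('2015-01-01', '2015-12-31')
             .mean())
    # Reduce every pixel of the region into a fixed-bin frequency histogram.
    hist = image.reduceRegion(
        reducer=ee.Reducer.fixedHistogram(0, 5000, 32),  # min, max, bin count
        geometry=region,
        scale=500,
        maxPixels=1e9,
    )
    return hist.getInfo()   # roughly {band: [[bin_start, count], ...]}

def region_histogram(band, mask, bins=32, value_range=(0, 5000)):
    """The same step offline: bin every pixel value inside a region mask."""
    counts, edges = np.histogram(band[mask], bins=bins, range=value_range)
    return counts, edges
```

If the band holds discrete class labels rather than reflectance values, `ee.Reducer.frequencyHistogram()` is the discrete-valued alternative.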
Related
I am trying to analyze images using IDL, and I need to create an ROI. Specifically, I have hundreds of images, each containing 6 bands. Two of the bands contain the longitude and latitude information of each image. I want to create an ROI by applying set thresholds to the latitude band. ENVI has a tool for creating ROIs from band thresholds, but I don't have a license for it. Is it possible to do this in IDL alone?
Thank you!
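The thresholding logic itself is simple; here is a NumPy sketch of the idea (in IDL the equivalent is a `WHERE()` call on the latitude band, no ENVI license needed). The threshold values are assumptions:

```python
import numpy as np

def roi_from_latitude(lat_band, lat_min, lat_max):
    """Boolean ROI mask: True where the latitude band falls in [lat_min, lat_max].

    This mirrors ENVI's band-threshold ROI; apply the mask to any other band
    with `other_band[mask]` to pull out the ROI pixels.
    """
    return (lat_band >= lat_min) & (lat_band <= lat_max)
```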
I want to search for images using their color histograms. To extract the histograms I will use OpenCV; I have also found examples that describe how to compare two images using color histograms. But I have some issues:
Google and other search engines use these histograms for searching by image, but I doubt they iteratively compare the query image against every image in the database (as is done in the OpenCV examples). So how can I implement a fast image search using histograms?
Can I use a common RDBMS like MySQL for this and other image-search purposes?
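A few things worth separating here. Comparing a query histogram against every stored one is indeed a linear scan, but with the histograms stacked into one matrix it becomes a single vectorized operation and stays fast for fairly large collections; beyond that, search engines use approximate nearest-neighbour indexes (LSH, k-d trees as in FLANN, inverted files) rather than an RDBMS. MySQL can store the vectors, but classic MySQL has no index type that accelerates this kind of similarity search. A minimal sketch, assuming L1-normalized histograms:

```python
import numpy as np

def hist_intersection(query, db):
    """Score one histogram against a whole database in one vectorized shot.

    query: (bins,) L1-normalized histogram
    db:    (N, bins) matrix, one L1-normalized histogram per stored image
    Returns (N,) scores in [0, 1]; 1.0 means identical histograms.
    """
    return np.minimum(db, query).sum(axis=1)

def top_k(query, db, k=5):
    scores = hist_intersection(query, db)
    order = np.argsort(scores)[::-1][:k]   # best-scoring images first
    return order, scores[order]
```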
I have the cluster centers (vectors) that I calculated using k-means, and I can calculate the feature vectors of an image and store them in a matrix. My problem is how to get the histogram of occurrences of visual words for a particular image from the data I have. I'm using OpenCV.
Can someone help please?
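Assuming the descriptors and k-means centers are plain float matrices, the histogram is just "assign each descriptor to its nearest center, then count". A NumPy sketch (OpenCV's `BFMatcher` could do the nearest-center assignment too, but the logic is the same):

```python
import numpy as np

def bovw_histogram(descriptors, centers):
    """Histogram of visual-word occurrences for one image.

    descriptors: (N, D) feature vectors extracted from the image
    centers:     (K, D) k-means cluster centers (the "visual words")
    """
    # squared Euclidean distance from every descriptor to every center
    d2 = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)    # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(centers)).astype(float)
    return hist / hist.sum()     # normalize so images of any size compare
```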
I am trying to develop an Android application that identifies paper currency from captured images. I have tried template matching, but it is scale-variant and doesn't give an accurate match. I am thinking of using a histogram-based method instead; will I get better results?
Also, how can I classify currencies of different colors based on the Hue channel?
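On the Hue question: in HSV the Hue channel is fairly robust to lighting, so a coarse hue histogram per note can separate differently colored currencies. A sketch, assuming you have already converted to HSV (in OpenCV, `cv2.cvtColor(img, cv2.COLOR_BGR2HSV)`, where H is in 0-179) and that the class hue ranges are placeholders to tune on real notes:

```python
import numpy as np

def dominant_hue(hue_channel, bins=18):
    """Center of the most frequent hue bin (OpenCV hue range 0-179)."""
    counts, edges = np.histogram(hue_channel, bins=bins, range=(0, 180))
    b = counts.argmax()
    return 0.5 * (edges[b] + edges[b + 1])

def classify_by_hue(hue_channel, classes):
    """classes: {'name': (lo, hi)} hue ranges, e.g. {'green note': (35, 85)}."""
    h = dominant_hue(hue_channel)
    for name, (lo, hi) in classes.items():
        if lo <= h < hi:
            return name
    return 'unknown'
```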
This seems like a case where recognition based on SIFT or SURF features can give you good results.
Extract SURF features from those images and build a FlannBasedMatcher (or another matcher). Then extract SURF features from the input image and use the matcher to compute distances between the input features and those in your training images. Select corresponding features with the lowest descriptor distances and check whether you have enough of them. If your input image has a lot of background, you can also compute a homography from those correspondences to check that your guess is correct.
There is an example in the OpenCV documentation that does something very similar to this.
I have a webcam mounted ~12 inches off a table, facing down. I have a sheet of paper under it that can move in any direction, but only in a 2D plane on the table. I want to use the webcam to figure out in which direction the sheet of paper is moving. Is there an algorithm to do this? What is it called?
I suggest optical flow. From Wikipedia:
Optical flow is the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer (an eye or a camera) and the scene
Or, to quote a presentation from the Stanford Artificial Intelligence Lab:
Given a set of points in an image, find those same points in another image.
This means you can compute the displacement of a set of points belonging to the object you want to track from one image to the next, resulting in a set of vectors that describe the direction of your object's motion.
Find good image features to track using cvGoodFeaturesToTrack() -- you should get good results as long as your sheet of paper is distinct from the table
Find its corners using cvFindCornerSubPix() and
Compute the optical flow using cvCalcOpticalFlowPyrLK() -- "LK" means "Lucas-Kanade", the name of the algorithm
See OpenCV's Motion Analysis and Object Tracking documentation for details.