Please help me write a Kinect program to recognize objects, find their 3D coordinates, and measure the distances between them. Which libraries and technologies should I use?
Rather vague question, but for starters I suggest you read this topic, which covers a similar question:
How to get real world coordinates (x, y, z) from a distinct object using a Kinect
Also look at OpenCV, a library that can work together with the Kinect and can process shapes, recognize objects, and so on.
I can recommend PCL, which can be found at http://pointclouds.org/.
It supports template matching, image smoothing, and more. They also provide multiple tutorials, and with a bit of searching you should be able to find implementations of a Kinect-based scanner.
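To make the geometry concrete, here is a minimal sketch (in Python with NumPy; the intrinsics FX, FY, CX, CY are placeholder values you would replace with your Kinect's actual calibration) of back-projecting a depth pixel to camera-space 3D coordinates via the pinhole model, then measuring the distance between two detected objects:

```python
import numpy as np

# Placeholder Kinect depth-camera intrinsics; read the real values
# from your SDK's calibration data.
FX, FY = 580.0, 580.0   # focal lengths in pixels
CX, CY = 320.0, 240.0   # principal point

def depth_pixel_to_3d(u, v, depth_m):
    """Back-project pixel (u, v) with depth in meters to camera-space XYZ."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

# Two detected objects at example pixel positions with measured depths:
p1 = depth_pixel_to_3d(150, 200, 1.20)
p2 = depth_pixel_to_3d(400, 220, 1.55)

# Euclidean distance between the two objects, in meters:
print(np.linalg.norm(p1 - p2))
```

Detecting the objects in the first place (i.e., finding the pixel coordinates to feed in) is where OpenCV or PCL comes in.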
I am new to music genre recognition and am trying to do a project that classifies the genre of a given music clip (I am using the GTZAN dataset).
I came across open-source code on Kaggle and have used its preprocessing:
(link)
I saw them using MFCCs, but I need to understand why they use these values (for example, why 13 coefficients?).
Moreover, I need to understand what an MFCC is. I have a basic knowledge of physics, but it is very difficult to understand what it represents and why these values were chosen (I don't need the broad physics behind it, just as simple as possible, please).
Another question:
MFCC image example
For example, the X axis here represents time, but what do the squares, i.e. the coefficients, represent on the Y axis?
I have tried searching the internet, but there is a lot of physics and music theory behind it; I need a simple explanation.
Thanks.
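For context, here is a minimal sketch using the librosa library (an assumption; the Kaggle code linked above may use a different toolkit, and the file name is a placeholder) of how 13 MFCCs are typically extracted. Each row of the resulting matrix is one coefficient (the Y axis of the plot) and each column is one short time frame (the X axis); 13 is a historical convention from speech recognition that keeps the coarse shape of the spectrum and drops fine detail:

```python
import librosa

# Load a 30-second GTZAN-style clip (placeholder path).
y, sr = librosa.load("blues.00000.wav", sr=22050)

# Each short time frame of audio is summarized by 13 numbers
# describing the spectral envelope (roughly, the timbre).
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# Shape is (13, n_frames): rows = coefficients (Y axis of the plot),
# columns = time frames (X axis of the plot).
print(mfcc.shape)
```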
I am struggling to understand the difference between the two concepts. From what I understand, both refer to turning raw data into more meaningful features that describe the problem at hand. Are they the same thing? If not, could anyone please provide examples of both?
Feature extraction is usually used when the original data is in a very different form, in particular when you could not have used the raw data directly.
E.g., the original data were images. You extract the redness value, or a description of the shape of an object in the image. It's lossy, but at least you get a usable result.
Feature engineering is the careful preprocessing of data into more meaningful features, even when you could have used the original data directly.
E.g., instead of using the variables x, y, z you decide to use log(x) - sqrt(y) * z instead, because your engineering knowledge tells you that this derived quantity is more meaningful for solving your problem. You get better results than without it.
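A minimal sketch of both ideas, reusing the illustrative examples above (the redness formula here is just one plausible definition, not a standard one):

```python
import numpy as np

# Feature extraction: reduce raw image pixels to a single "redness"
# value; img is assumed to be an RGB array of shape (H, W, 3).
def redness(img):
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return float(np.mean(r) - 0.5 * (np.mean(g) + np.mean(b)))

# Feature engineering: combine variables you already have into a
# derived quantity that domain knowledge says is more meaningful.
def engineered_feature(x, y, z):
    return np.log(x) - np.sqrt(y) * z
```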
Feature engineering is transforming raw data into features/attributes that better represent the underlying structure of your data, usually done by domain experts.
Feature extraction is transforming raw data into the desired form.
Hi, I want to predict the health level (high, medium, low) of a leaf using image processing and data mining. So far, I have thought of extracting colors from the leaf and using a Bayes algorithm to predict leaf health, and the data mining part is complete now, but I need extra features for prediction. We only use orchid leaves, so I can't use vein structure. Can anyone tell me what other features can be extracted from a leaf to identify its health level? Any ideas or comments that help me improve my project are welcome. Thanks.
There are many possible approaches to a problem like this. One common method is the bag-of-features model. Take a look at this example using the Computer Vision System Toolbox in MATLAB.
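If you want simple extra features beyond raw color, here is a minimal sketch in Python with OpenCV (rather than MATLAB; the file name is a placeholder, and these particular statistics are common choices for plant-health work rather than a prescribed set):

```python
import cv2
import numpy as np

# Load a leaf image (placeholder path) and convert to HSV, which
# separates hue (greenness vs. yellowing) from brightness.
img = cv2.imread("orchid_leaf.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

features = {
    # Color statistics: yellowing or browning shifts hue and saturation.
    "mean_hue": float(np.mean(hsv[..., 0])),
    "mean_saturation": float(np.mean(hsv[..., 1])),
    "std_brightness": float(np.std(hsv[..., 2])),
    # Rough texture measure: variance of the Laplacian responds to
    # spots and lesions as high-frequency detail.
    "laplacian_var": float(cv2.Laplacian(gray, cv2.CV_64F).var()),
}
print(features)
```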
I want to experiment with the k-means clustering method on different kinds of images, so I am trying to find the kinds of images used in image segmentation, such as MRI images. I want to gather some more categories.
Any suggestion would be greatly appreciated.
Although this is not the correct place to ask your question, to help you: image segmentation has a wide range of applications, including satellite imagery, medical imaging, texture recognition, facial recognition systems, automatic number plate recognition, and many other machine vision applications.
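Whichever category of images you gather, here is a minimal sketch of k-means color segmentation with OpenCV in Python to start experimenting with (the image path and the choice of K are placeholders):

```python
import cv2
import numpy as np

# Load any test image (placeholder path) and flatten it to a list
# of BGR pixel vectors, as cv2.kmeans expects float32 samples.
img = cv2.imread("mri_slice.png")
pixels = img.reshape(-1, 3).astype(np.float32)

# Cluster the pixel colors into K groups; each cluster is one segment.
K = 4
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(pixels, K, None, criteria, 10,
                                cv2.KMEANS_RANDOM_CENTERS)

# Repaint every pixel with its cluster center to visualize segments.
segmented = centers[labels.flatten()].astype(np.uint8).reshape(img.shape)
cv2.imwrite("segmented.png", segmented)
```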
I am creating an iPad app for golfers. I will get their scorecard image, as below, and I want to calculate the sum of the scores for each person by scanning this image.
Is there any way to do this? Please suggest an approach.
I have used OpenCV (http://opencv.org) in the past to do something similar, but with Sudoku puzzles.
It is a LOT of work, and making it work with handwriting will add to the difficulty.
I found a really good resource for analysing Sudoku grids. I'll try to find it again, but it was 4 years ago.
Good luck though.
There is a Tesseract port for iOS which is about the best OCR you're likely to get on the platform without either:
A) Porting another OCR library
or
B) Shipping the images off to an online OCR service
To make this more complex, you don't just want OCR; you want to OCR handwriting and map it into a grid. This is not something that can be done overnight; it is in fact rather complex. Not impossible, but complex.
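As a rough illustration of the grid-plus-OCR pipeline (sketched in Python with pytesseract standing in for the iOS Tesseract port; the image path and cell coordinates are placeholders that real grid detection would supply):

```python
import cv2
import pytesseract

# Load the scorecard photo (placeholder path) and binarize it so the
# digits stand out from the printed grid.
img = cv2.imread("scorecard.jpg", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# A real app would detect the grid lines (e.g. with cv2.HoughLinesP)
# and derive one bounding box per score cell; these are hard-coded.
cells = [(100, 50, 60, 40), (160, 50, 60, 40), (220, 50, 60, 40)]

total = 0
for x, y, w, h in cells:
    roi = binary[y:y + h, x:x + w]
    # Treat each cell as a single character and restrict to digits.
    text = pytesseract.image_to_string(
        roi, config="--psm 10 -c tessedit_char_whitelist=0123456789").strip()
    if text.isdigit():
        total += int(text)

print("Sum of scores:", total)
```

Note that stock Tesseract is trained on printed text, so expect poor accuracy on handwritten digits without a custom-trained model; that limitation is part of why the suggestion below may be the pragmatic answer.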
Would it not just be easier to let the players enter their scores straight into the app and then AirPrint a scorecard?