I have some parallel contours, and I would like to connect them to form a continuous path. Does anyone have an idea or know an algorithm to do that? The attached image is an example of what I would like to achieve. Many thanks
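One common way to turn a stack of roughly parallel contours into a single path is a boustrophedon (zigzag) traversal: walk the contours in order and reverse every other one so the end of each contour lies near the start of the next. This is a minimal numpy sketch of that idea (the alternating-reversal rule is an assumption; for irregular contours you would pick the orientation by comparing endpoint distances instead):

```python
import numpy as np

def connect_contours(contours):
    """Join a list of (N_i, 2) point arrays into one continuous path
    by reversing every other contour (zigzag / boustrophedon order)."""
    path = []
    for i, c in enumerate(contours):
        c = np.asarray(c, dtype=float)
        if i % 2 == 1:
            c = c[::-1]  # reverse alternate contours so endpoints line up
        path.append(c)
    return np.vstack(path)

# Three horizontal "parallel contours" as a toy example
c0 = np.array([[0, 0], [1, 0], [2, 0]])
c1 = np.array([[0, 1], [1, 1], [2, 1]])
c2 = np.array([[0, 2], [1, 2], [2, 2]])
path = connect_contours([c0, c1, c2])
print(path[2], path[3])  # end of c0 meets the (reversed) start of c1
```

For real contours you could additionally insert the short connecting segments explicitly, or smooth the joins afterwards.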
I am currently working with the compass-gait example. I am trying to visualize a planned path for a robot. Is there a way to draw a few lines dynamically on the visualizer during the simulation, as the robot traces its path?
I am looking into publishing LCM messages directly from here, but I am finding it hard to get the right sequence. Is there a sub-module I can check to get a better idea of this?
I am currently stuck on my project of reconstructing an object from different images of the same object.
So far I have calculated the feature matches of each image using AKAZE features.
Now I need to derive the camera parameters and the 3D Point coordinates.
However, I am a little bit confused: as I understand it, I need the camera parameters to determine the 3D points, and vice versa.
My question is: how can I get the 3D points and the camera parameters in one step?
I have also looked into the bundle adjustment approach described at http://scipy-cookbook.readthedocs.io/items/bundle_adjustment.html, but it requires an initial guess for the camera parameters and 3D coordinates.
Can somebody point me to pseudocode, or suggest a pipeline?
Thanks in advance
As you seem to be new at this, I strongly recommend you first play with an interactive tool to gain an intuition for the issues involved in solving for camera motion and structure. Try Blender: it's free, and you can find plenty of video tutorials on YouTube on how to use it for matchmoving: examples 1, 2
Take a look at VisualSFM (http://ccwu.me/vsfm/). It is an interactive tool for such tasks. It will give you an idea which algorithms to use.
The Computer Vision book by Richard Szeliski (http://szeliski.org/Book/) will give you the theoretical background.
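On the chicken-and-egg question: the usual way to break it is two-view initialization. From your AKAZE matches you can estimate the relative pose of one image pair (e.g. with cv2.findEssentialMat and cv2.recoverPose) and then triangulate the matched points; that gives the initial guess the bundle adjustment cookbook asks for. A numpy-only sketch of the triangulation step, with synthetic camera matrices standing in for the recovered pose:

```python
import numpy as np

# Two synthetic projection matrices P = K [R | t]; in practice R, t come
# from essential-matrix decomposition of your feature matches.
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])               # camera 1 at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # camera 2, shifted

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 2D-2D correspondence."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null-space vector = homogeneous 3D point
    return X[:3] / X[3]

# Sanity check: project a known point into both views, then recover it.
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X = triangulate(P1, P2, x1, x2)
print(X)
```

Once you have points and poses from one image pair, you register further cameras against the existing points (e.g. with cv2.solvePnP) and refine everything together with bundle adjustment.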
I am working on image classification problem. How to find out specific features from the image manually that will help to build a DNN? Consider an image of a man talking on phone while driving for classification as distracted.
You don't do this. Having a good learned feature extractor is the reason we use DNNs in the first place.
Also, take a look at https://www.kaggle.com/c/state-farm-distracted-driver-detection
I want to recognize nutrient information from package labels (see the attached sample nutrient label). This is one package image; different brands may style/layout their labels differently. But I know some things for sure: the layout will be somewhat tabular with certain keywords in the heading, like 'Nutrient', and the table contents will contain certain common words, like Energy/Fat etc. I want to extract these values as text and save them to my db.
The sample image is part of a bigger problem, finding the contour/box that might contain this section 'Nutrient Label'.
As I understand, there are 3 broad steps:
Scan the input image (product front/back/side image) for the best contour that could be my target contour containing the nutrient information.
Go to that contour and perform OCR (ideally retaining the layout information rather than outputting everything on one line).
Scan the resulting text for the needed info.
I am a beginner in image recognition, so it would be a great help to get feedback on my approach. For instance, should I look for text in the image, or gather similar images, train a model, and then classify, similar to performing face recognition?
If someone has already solved this problem, it would be great to get some pointers (there is no fun in reinventing the wheel).
If it is a research problem, then relevant code/libraries/pointers/similar SO questions that I could refer to would help.
It would be highly appreciated if the answers are not too general (e.g. "perform feature extraction" - I would have no clue what feature extraction is; a sample code pointer would be awesome).
I thank you for your time and help.
thanks
Chahat
You would need to collect at least 200-300 images for sufficient training.
As for steps 2 and 3: I did solve this problem, but with a non-free solution, so I am not in a position to give directions here.
Could anybody suggest an automatic way to convert from a list of images (without Kinect) to a point cloud in opencv?
Take a look at the OpenCV Contrib Structure From Motion (SFM) module. There are two nice examples, trajectory_reconstruction.cpp and scene_reconstruction.cpp.
Also, there is an alternative called Multi-View Environment, which you can find on GitHub at simonfuhrmann/mve and which might meet your criteria too.