I have some 3D scan images of the foot. How can I detect the foot length automatically from those images? I have only the following information from the images:
Toes
Metatarsals
Heel
Weight
Sex
This question is related to my final project. In the Gazebo simulation environment, I am trying to detect obstacles' colors and calculate the distance between the robot and the obstacles. I am currently identifying their colors with the help of OpenCV methods (object with bounding box), but I don't know how to calculate the distance between the robot and each obstacle. I have my robot's position. I will not use stereo. I know the size of the obstacles. Waiting for your suggestions and ideas. Thank you!
My robot's topics:
cameras/camera/camera_info (Type: sensor_msgs/CameraInfo)
cameras/camera/image_raw (Type: sensor_msgs/Image)
sensors/lidars/points (Type: sensor_msgs/PointCloud2)
You can project the point cloud into image space, e.g., with OpenCV (as shown here). That way, you can filter all points that fall within the bounding box in image space. Of course, projection errors caused by the differences between the two sensors need to be addressed, e.g., by removing the lower and upper quartile of points with respect to their distance from the LiDAR sensor. Finally, you can use the remaining points to estimate the distance.
We have such a system running and it works just fine.
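A minimal sketch of that projection-and-filter step in Python, assuming the LiDAR points have already been transformed into the camera frame (the extrinsic calibration between the two sensors is up to you) and that `K` and `D` are the intrinsics and distortion coefficients from the `cameras/camera/camera_info` topic; the bounding box is whatever your OpenCV color detector produced:

```python
import numpy as np
import cv2

def estimate_distance(points_cam, K, D, bbox):
    """Estimate obstacle distance from LiDAR points projected into the image.

    points_cam: (N, 3) LiDAR points already expressed in the camera frame.
    K, D: camera matrix and distortion coefficients from sensor_msgs/CameraInfo.
    bbox: (x, y, w, h) bounding box from the color-based detector.
    """
    # Keep only points in front of the camera.
    points_cam = points_cam[points_cam[:, 2] > 0]
    if points_cam.shape[0] == 0:
        return None

    # Project the 3D points into pixel coordinates.
    rvec = tvec = np.zeros(3)
    pixels, _ = cv2.projectPoints(points_cam.astype(np.float64), rvec, tvec, K, D)
    pixels = pixels.reshape(-1, 2)

    # Keep points that fall inside the bounding box.
    x, y, w, h = bbox
    inside = ((pixels[:, 0] >= x) & (pixels[:, 0] <= x + w) &
              (pixels[:, 1] >= y) & (pixels[:, 1] <= y + h))
    dists = np.linalg.norm(points_cam[inside], axis=1)
    if dists.size == 0:
        return None

    # Trim the lower and upper quartiles to suppress projection errors,
    # then use the remaining points for the distance estimate.
    q1, q3 = np.percentile(dists, [25, 75])
    trimmed = dists[(dists >= q1) & (dists <= q3)]
    return float(np.median(trimmed)) if trimmed.size else float(np.median(dists))
```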
I have images of rocks of varied shape and texture, obtained by laser scanning (there are 300 images obtained by rotation). This means that the rock is effectively scanned in 360 degrees, with a slight rotation between consecutive images. There are 6 marks on the rock: circles, crosses, T's, X's, and their combinations (an example is in the picture below). I want to automatically detect these shapes given a set of images (in at least 1 of the 300 images).
I've tried template matching with OpenCV, but I haven't had success. Do you know any methods or libraries that could solve this problem? I'm thinking about training a neural network for this.
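For what it's worth, plain `cv2.matchTemplate` is neither rotation nor scale invariant, which may be why it failed here. A hedged sketch of sweeping the template over a range of rotations, assuming a hypothetical cropped mark `mark.png` and one of the scan images `rock.png`:

```python
import cv2

# Hypothetical file names; replace with one of the 300 scan images and a cropped mark.
image = cv2.imread("rock.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("mark.png", cv2.IMREAD_GRAYSCALE)

best = (-1.0, None, None)  # (score, location, angle)
h, w = template.shape

# matchTemplate is not rotation invariant, so sweep the template through angles.
for angle in range(0, 360, 10):
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(template, M, (w, h))
    result = cv2.matchTemplate(image, rotated, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val > best[0]:
        best = (max_val, max_loc, angle)

score, loc, angle = best
print(f"best score {score:.2f} at {loc}, template rotated by {angle} degrees")
```

If a rotation sweep like this still gives poor scores on the curved, textured surface, a small detector trained on cropped examples of the marks is a reasonable next step.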
In a project, we are creating a virtual tour of an apartment. We want to display the room dimensions in that virtual image. So far we are using a RICOH Theta V to create the virtual tour. One example is given below.
The first image shows a panoramic view of the room. Now, using Lidar, we want to measure the room's length and width. My question is: is there any way I could attach this Lidar information to the image that I got from the RICOH, so that the user can measure distances from the picture, or so that we can display the length and width of the room?
So, in short, I want to know:
1. What could be the possible solution to modify the image based on Lidar output?
2. Is there any way where I could find room dimensions using Lidar output?
I would be very glad if you could give me some ideas.
The LIDAR sensor outputs a pointcloud, which is a 3D representation of your room. Every point in the pointcloud corresponds to a small spot in the room, and the distances between points correspond to real-world distances between objects.
Therefore, you would only need to know which points correspond to the corners of the room; then you could measure the distances between them and compute the area. There are some options for automatically detecting corners in the pointcloud, some of which are suggested here: How to find corner points of any object in point cloud and find distance between corner points in inch/cm/m?
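Once you have corner coordinates from the pointcloud (in metres), the measurement itself is trivial. A small sketch, assuming four floor corners that were picked manually or by an automatic detector (the coordinates below are made up for illustration):

```python
import numpy as np

# Hypothetical corner points of the floor, in metres, taken from the pointcloud.
corners = np.array([
    [0.0, 0.0, 0.0],
    [4.2, 0.0, 0.0],
    [4.2, 3.1, 0.0],
    [0.0, 3.1, 0.0],
])

# Side lengths are just Euclidean distances between consecutive corners.
sides = [np.linalg.norm(corners[i] - corners[(i + 1) % 4]) for i in range(4)]
length, width = sides[0], sides[1]

print(f"room is roughly {length:.2f} m x {width:.2f} m, area {length * width:.2f} m^2")
```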
The problem is that this is not as easy to correlate with an image. One approach, assuming a static setup, would be to manually align the pointcloud with the image.
Also, just as there are approaches for automatic corner detection in the pointcloud, there are options for automatic corner detection in images, such as the Harris corner detector.
Of course, all these methods will be prone to detecting all corners in the image, so some heuristics for filtering them might be needed.
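As an illustration of the image side, a minimal Harris corner detection sketch with OpenCV; the threshold factor is just an assumption to tune, and, as noted, you would still need heuristics to keep only the actual room corners:

```python
import cv2
import numpy as np

# Hypothetical file name for the panoramic image from the camera.
image = cv2.imread("room.jpg")
gray = np.float32(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY))

# Harris corner response: block size 2, Sobel aperture 3, k = 0.04.
response = cv2.cornerHarris(gray, 2, 3, 0.04)

# Keep only strong responses; the 0.01 factor is a tuning parameter.
corners = np.argwhere(response > 0.01 * response.max())

# Mark detected corners for visual inspection.
for y, x in corners:
    cv2.circle(image, (int(x), int(y)), 3, (0, 0, 255), -1)
cv2.imwrite("room_corners.jpg", image)
```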
I want to develop a workflow to detect a specific disease in potato plants by remote sensing.
I have acquired the images of the potato field by mounting a multispectral camera on a drone that flew at an altitude of 5m above the plants.
The multispectral camera has 5 bands, namely: Blue, Green, Red, NIR, and RedEdge.
I have converted the DN (raw digital number) values of all bands into reflectance values.
I first trained an SVM to segment soil from plants, and then also applied SAVI (soil-adjusted vegetation index) to refine the soil/plant segmentation.
Now, I want to apply NDVI (normalized difference vegetation index) to determine the health of the plants pixel-wise.
Is this the right approach to follow? Will NDVI be reasonable to apply to images taken at just 5 m altitude? Or is there a better approach?
Best regards...
Assuming that the vegetation mask you created is doing a good job of distinguishing between non-vegetation and vegetation in the scene, I recommend that you create a vector file, such as points or lines, corresponding to individual potato plants or rows of plants in your scene. Once you have done that, you can buffer the geometry you have created to facilitate the calculation of zonal statistics within each polygon. If you tabulate the mean for a given layer in your multispectral raster file, that will give you the mean reflectance for a given band on a per-plant or per-row basis.
Because the reflectance values for the raw bands from multispectral sensors are sensitive to the lighting conditions (incident light) of the scene, it is more repeatable to use ratios of these bands such as NDVI to predict plant vigor (or stress in the case of diseased individuals).
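A hedged sketch of both steps (pixel-wise NDVI, then a mean per buffered plant geometry), assuming the reflectance bands live in a multi-band GeoTIFF called `field.tif` with Red as band 3 and NIR as band 4, and the plant points in `plants.gpkg` in a projected (metric) CRS; the file names, band order, and buffer size are all assumptions:

```python
import numpy as np
import rasterio
from rasterio.features import geometry_mask
import geopandas as gpd

with rasterio.open("field.tif") as src:          # hypothetical reflectance raster
    red = src.read(3).astype("float64")          # band order is an assumption
    nir = src.read(4).astype("float64")
    transform, shape = src.transform, (src.height, src.width)

# Pixel-wise NDVI = (NIR - Red) / (NIR + Red); a small epsilon avoids division by zero.
ndvi = (nir - red) / (nir + red + 1e-9)

# Buffer each plant point and compute its mean NDVI (a simple zonal statistic).
plants = gpd.read_file("plants.gpkg")            # hypothetical per-plant point file
for idx, geom in zip(plants.index, plants.geometry.buffer(0.25)):
    mask = geometry_mask([geom], out_shape=shape, transform=transform, invert=True)
    print(idx, float(np.nanmean(ndvi[mask])))
```

You could additionally apply your vegetation mask before averaging, so that soil pixels inside a buffer do not drag the per-plant NDVI down.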
I have developed a workflow that might help you extract the data you want:
http://blogs.oregonstate.edu/geog566spatialstatistics/2017/06/01/processing-multispectral-imagery-unmanned-aerial-vehicle-phenotyping-seedlings-common-garden-boxes-part-1/
http://blogs.oregonstate.edu/geog566spatialstatistics/2017/06/02/processing-multispectral-imagery-unmanned-aerial-vehicle-phenotyping-seedlings-common-garden-boxes-part-2/
In an iPad app, we would like to help users visualize the width of an object with their camera in the style of an augmented reality app.
For instance, if we want to help someone visualize the width of a bookshelf against a wall, how could we do that? Are there algorithms to estimate width (i.e., if you're standing 5 feet away and pointing your camera at the wall, 200 pixels in the camera will represent X inches)?
Any good resources to start looking?
To do this, you may wish to do a bit of research yourself, as this will vary depending upon the camera being used, its resolution, and its depth of field.
I would recommend taking a large strip of paper (wallpaper would work fine) and writing measurements on it at specified intervals; e.g., you could write a distance marker for each foot on the paper. Then all you need to do is stand at varying distances from the wall with the paper mounted to it and take photographs. You should then be able to establish how distance, resolution, and measurements correlate to each other, and use these findings to form your own algorithm. You've essentially answered your own question.
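The relation you will end up fitting is essentially the pinhole camera model: an object of real width W at distance Z spans roughly w = f * W / Z pixels, where f is the focal length expressed in pixels. A tiny sketch of inverting that relation; the focal length value below is an assumption, which on iOS you could obtain from the camera intrinsics or estimate with the paper-strip experiment above:

```python
def width_from_pixels(pixel_width, distance, focal_length_px):
    """Pinhole model: real width = pixel width * distance / focal length (in pixels)."""
    return pixel_width * distance / focal_length_px

# Example: 200 px wide at 5 ft, with an assumed focal length of 1500 px,
# comes out to about 0.67 ft (roughly 8 inches).
print(width_from_pixels(200, 5.0, 1500.0))
```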