How can I show a satellite image as a layer? In which format should this image be?
Hi there!
I have satellite images like RapidEye.
I created a local OpenStreetMap layer like this example.
Now I want to add multiple layers such as RGB, NDVI, and NDWI.
Could you give me some tips, please?
Thank you
Use a WMS server to publish your image as a layer. This can be done with GeoServer or MapServer, for example.
The format of the image depends on the WMS server you are going to use. GeoServer accepts a large number of different formats; details may be found here.
Here is also a description of how to publish a georeferenced image on GeoServer.
I am not as familiar with other WMS servers, but the process should be more or less the same.
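As a side note on the index layers: if your RapidEye scenes are not already in a format GeoServer likes, you can derive an NDVI band and write it as a GeoTIFF with GDAL before publishing. Below is only a minimal sketch, not the only way to do it; the file names are placeholders and the usual RapidEye band order (band 3 = red, band 5 = NIR) is an assumption you should verify for your data.

    // Minimal sketch: compute NDVI = (NIR - Red) / (NIR + Red) from a RapidEye
    // scene and write it as a single-band GeoTIFF that GeoServer can publish.
    #include <gdal_priv.h>
    #include <vector>

    int main() {
      GDALAllRegister();

      GDALDataset* src = static_cast<GDALDataset*>(
          GDALOpen("rapideye_scene.tif", GA_ReadOnly));  // placeholder name
      if (src == nullptr) return 1;

      const int nx = src->GetRasterXSize();
      const int ny = src->GetRasterYSize();
      std::vector<float> red(static_cast<size_t>(nx) * ny);
      std::vector<float> nir(red.size());
      // Assumed RapidEye band order: 1 blue, 2 green, 3 red, 4 red edge, 5 NIR.
      src->GetRasterBand(3)->RasterIO(GF_Read, 0, 0, nx, ny, red.data(),
                                      nx, ny, GDT_Float32, 0, 0);
      src->GetRasterBand(5)->RasterIO(GF_Read, 0, 0, nx, ny, nir.data(),
                                      nx, ny, GDT_Float32, 0, 0);

      // Per-pixel NDVI, guarding against division by zero.
      std::vector<float> ndvi(red.size());
      for (size_t i = 0; i < ndvi.size(); ++i) {
        const float sum = nir[i] + red[i];
        ndvi[i] = (sum != 0.0f) ? (nir[i] - red[i]) / sum : 0.0f;
      }

      // Write the result as a GeoTIFF, copying georeferencing from the source.
      GDALDriver* gtiff = GetGDALDriverManager()->GetDriverByName("GTiff");
      GDALDataset* dst =
          gtiff->Create("ndvi.tif", nx, ny, 1, GDT_Float32, nullptr);
      double geo_transform[6];
      if (src->GetGeoTransform(geo_transform) == CE_None)
        dst->SetGeoTransform(geo_transform);
      dst->SetProjection(src->GetProjectionRef());
      dst->GetRasterBand(1)->RasterIO(GF_Write, 0, 0, nx, ny, ndvi.data(),
                                      nx, ny, GDT_Float32, 0, 0);

      GDALClose(dst);
      GDALClose(src);
      return 0;
    }

GeoServer can then serve the RGB scene and the NDVI/NDWI GeoTIFFs as separate WMS layers, with a raster style (SLD color ramp) applied to the index layers if you want them colorized.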
I am using Drake to implement a visual servoing scheme. For that I need to process color images and extract features.
I am using the ManipulationStation class, and I have already published the color images of the rgbd_sensor objects to LCM channels (in C++). Now I want to process the images during the simulation.
I know it would be best to process the images internally (without using the ImageWriter), and for that I cannot use the image_array_t on the LCM channels or ImageRgba8U; I have to convert the images to an Eigen or OpenCV type.
I will then use feature extraction functions in these libraries. These features will be used in a control law.
Do you have any examples of how to write a system or code in C++ that converts Drake images to OpenCV or Eigen types? What is the best way to do it?
Thanks in advance for the help!
Arnaud
There is currently no example converting a drake::Image to Eigen or OpenCV, but it should be pretty straightforward.
What you should do is create a system akin to ImageWriter. It takes an image as an input and does whatever processing you need on the image. You connect its input port to the RgbdSensor's output port (again, like ImageWriter). The only difference is that instead of taking the image and converting it to an LCM data type, you convert it to your OpenCV or Eigen type and apply your logic to that.
If you look, ImageWriter declares a periodic publish event. Change that to (probably) a discrete update event (still periodic, I'd imagine) and then change the ImageWriter::WriteImage callback into your "image-to-OpenCV" callback.
Finally, you could also declare an output port that would make your converted image available to other systems.
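To make that concrete, here is a minimal sketch of such a system, not an official Drake example. It assumes the sensor's color output is an ImageRgba8U whose pixel buffer is contiguous and row-major with interleaved channels, and it uses a periodic publish event because it only reads the image; a discrete update event, as suggested above, is what you would switch to once you store results in the system's state or expose them on an output port.

    #include <drake/common/value.h>
    #include <drake/systems/framework/leaf_system.h>
    #include <drake/systems/sensors/image.h>
    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>

    using drake::systems::sensors::ImageRgba8U;

    // Converts a Drake RGBA image to an OpenCV BGR matrix.
    cv::Mat ToOpenCv(const ImageRgba8U& image) {
      // Wrap the existing buffer without copying, then let cvtColor allocate
      // a new BGR matrix (most OpenCV routines expect BGR).
      const cv::Mat wrapped(image.height(), image.width(), CV_8UC4,
                            const_cast<uint8_t*>(image.at(0, 0)));
      cv::Mat bgr;
      cv::cvtColor(wrapped, bgr, cv::COLOR_RGBA2BGR);
      return bgr;
    }

    // Minimal LeafSystem in the spirit of ImageWriter: one abstract input
    // port for the camera image and a periodic event that runs the
    // conversion plus your feature extraction.
    class ImageFeatureExtractor final : public drake::systems::LeafSystem<double> {
     public:
      explicit ImageFeatureExtractor(double period_sec) {
        image_port_ = &DeclareAbstractInputPort("color_image",
                                                drake::Value<ImageRgba8U>());
        DeclarePeriodicPublishEvent(period_sec, 0.0,
                                    &ImageFeatureExtractor::ProcessImage);
      }

     private:
      drake::systems::EventStatus ProcessImage(
          const drake::systems::Context<double>& context) const {
        const auto& image = image_port_->Eval<ImageRgba8U>(context);
        const cv::Mat bgr = ToOpenCv(image);
        // ... run OpenCV feature extraction on `bgr` and feed your control law ...
        return drake::systems::EventStatus::Succeeded();
      }

      const drake::systems::InputPort<double>* image_port_{};
    };

You would then add this system to the same DiagramBuilder as the ManipulationStation and connect the RgbdSensor's color image output port to its "color_image" input port.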
I have searched a lot to find out how to test whether an object exists in an image, and I am looking for the name of the technique/technology that provides this. As an example, take Instagram: you upload an image and Instagram writes "This image may contain sea, people, car." Is this content-based image retrieval? Do I need local feature extraction for it? Is it based on deep learning, or does it work with something like SIFT?
Everything I have studied so far only takes a query image and searches a database to say which images are "similar" to it, not what an image contains.
Yes, it uses deep learning: a model is trained to recognize a number of objects in an image, using either a bounding-box approach or multilabel classification. When a new image is passed to the model, it predicts the labels of all the objects present in that image.
This is known as object detection.
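To give a feel for what running such a detector looks like at inference time, here is a minimal sketch using OpenCV's dnn module with a pre-trained SSD-style Caffe model; the model file names are placeholders for weights you would download yourself, and any detection model supported by cv::dnn would do.

    #include <opencv2/dnn.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <iostream>

    int main() {
      // Placeholder model files for a pre-trained SSD detector.
      cv::dnn::Net net = cv::dnn::readNetFromCaffe(
          "MobileNetSSD_deploy.prototxt", "MobileNetSSD_deploy.caffemodel");
      cv::Mat img = cv::imread("photo.jpg");

      // Preprocess into the 4-D blob the network expects and run a forward pass.
      cv::Mat blob = cv::dnn::blobFromImage(img, 1.0 / 127.5, cv::Size(300, 300),
                                            cv::Scalar(127.5, 127.5, 127.5));
      net.setInput(blob);
      cv::Mat out = net.forward();

      // SSD-style output: one row per detection,
      // [image_id, class_id, confidence, x1, y1, x2, y2] in relative coordinates.
      cv::Mat detections(out.size[2], out.size[3], CV_32F, out.ptr<float>());
      for (int i = 0; i < detections.rows; ++i) {
        const float confidence = detections.at<float>(i, 2);
        if (confidence > 0.5f) {
          std::cout << "class " << detections.at<float>(i, 1)
                    << " with confidence " << confidence << "\n";
        }
      }
      return 0;
    }

Collecting the predicted class labels over all confident detections gives exactly the kind of "this image may contain ..." summary described in the question.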
I am working on an image classification problem. How do I manually find specific features in an image that will help to build a DNN? Consider, for example, classifying an image of a man talking on the phone while driving as "distracted".
You don't do this. Learning a good feature extractor automatically is the main reason we use DNNs in the first place.
Also: have a look at https://www.kaggle.com/c/state-farm-distracted-driver-detection
I have a sequence of DICOM images constituting a single scan. I would like to build a CGAL mesh representing the 3D volume segmented out of that scan by thresholding. I prefer Windows and few, easy-to-build dependencies, if any.
I've heard that ITK can be used for this purpose, but it is a large library with a lot of overlap with CGAL. Are there any other options?
The example CGAL-4.9/examples/Mesh_3/mesh_3D_gray_vtk_image.cpp should be a good starting point. As this is not easy to find, we will add a link to it in the CGAL User Manual; see the pull request on GitHub.
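For the DICOM part, the beginning of that example (reading the series with VTK and wrapping it as a CGAL::Image_3) looks roughly like the sketch below; the directory name is a placeholder, and the actual thresholded meshing (gray-image mesh domain with your threshold as the iso-value, then CGAL::make_mesh_3) is left to the cited example.

    #include <vtkDICOMImageReader.h>
    #include <vtkSmartPointer.h>
    #include <CGAL/Image_3.h>
    #include <CGAL/read_vtk_image_data.h>

    int main() {
      // Read the whole DICOM series from a directory into one vtkImageData volume.
      auto reader = vtkSmartPointer<vtkDICOMImageReader>::New();
      reader->SetDirectoryName("./dicom_series");  // placeholder path
      reader->Update();

      // Wrap the VTK volume as a CGAL::Image_3, the input type expected by
      // the Mesh_3 gray-image machinery.
      CGAL::Image_3 image = CGAL::read_vtk_image_data(reader->GetOutput());

      // From here, follow mesh_3D_gray_vtk_image.cpp: build a gray-image mesh
      // domain with your threshold as the iso-value and call CGAL::make_mesh_3.
      return 0;
    }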
I am working on a school project in which we are trying to teach a neural network to distinguish buildings from non-buildings. The problem I am having right now is representing the data in a form that is "readable" by the classifier function.
The training data is a bunch of pictures plus a .wkt file with the coordinates of the buildings in each picture. So far we have been able to rescale the polygons, but we got stuck there.
Can you give any hints or ideas on how to bring all this into an appropriate form?
Edit: I do not need the code written for me; a link to an article or book on a similar subject is more what I am looking for.
You did not mention which framework you are using, so I will give an answer for Caffe.
Your problem is very close to detecting objects within an image: you have full images with object (in your case, building) bounding boxes.
The easiest way of doing this is through a Python data layer that reads an image and a file with the stored coordinates for that image and feeds them into your network. A tutorial on how to use it can be found here: https://github.com/NVIDIA/DIGITS/tree/master/examples/python-layer
To speed things up, you may want to store the image/coordinate pairs in a custom LMDB database.
Finally, a good working example with a complete Caffe implementation can be found in the Faster-RCNN library here: https://github.com/rbgirshick/caffe-fast-rcnn/
Check roi_pooling_layer.cpp in their custom Caffe branch and roi_data_layer to see how the data is fed into the network.
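One small preprocessing step you seem to be stuck on is turning the rescaled .wkt building polygons into the axis-aligned bounding boxes that such a data layer expects. A minimal, library-free sketch (shown in C++ here; the same few lines translate directly to whatever language you preprocess with):

    #include <algorithm>
    #include <iostream>
    #include <limits>
    #include <vector>

    // A 2-D vertex in pixel coordinates, after the .wkt polygon has been
    // rescaled to the image resolution.
    struct Point { double x, y; };

    // Axis-aligned box in the (x_min, y_min, x_max, y_max) form most
    // detection pipelines expect.
    struct Box { double x_min, y_min, x_max, y_max; };

    // Collapse a building footprint polygon into its bounding box.
    Box PolygonToBox(const std::vector<Point>& polygon) {
      Box box{std::numeric_limits<double>::max(),
              std::numeric_limits<double>::max(),
              std::numeric_limits<double>::lowest(),
              std::numeric_limits<double>::lowest()};
      for (const Point& p : polygon) {
        box.x_min = std::min(box.x_min, p.x);
        box.y_min = std::min(box.y_min, p.y);
        box.x_max = std::max(box.x_max, p.x);
        box.y_max = std::max(box.y_max, p.y);
      }
      return box;
    }

    int main() {
      // Example footprint, already rescaled to pixel coordinates.
      const std::vector<Point> footprint{{10, 12}, {40, 15}, {38, 50}, {9, 47}};
      const Box box = PolygonToBox(footprint);
      std::cout << box.x_min << " " << box.y_min << " "
                << box.x_max << " " << box.y_max << "\n";
      return 0;
    }

Writing one such box per building, per image, gives you annotation files in the format the data layer (or an LMDB entry) can consume.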