I'm new to this. I am working on a mapping project and I need help filtering LIDAR point-cloud data. I have two questions and I hope you can help with them:
1) Can you use VeloView to filter point cloud data? Can you crop the captured data? If so, what's the process?
2) Can you export/convert a PCAP file to .csv?
Thanks
Have a look at the VeloView User Guide.
For manipulation and analysis purposes, you can use the VeloView Python console interface described in section III - Python Console.
To crop the captured data:
File --> Save As --> [preferred option] --> Frames from.
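If it helps, once frames are exported to CSV the cropping/filtering can also be done outside VeloView. Below is a minimal pandas sketch; the column names ("Points:0", "Points:1", "Points:2", "intensity") and the file name are assumptions, so check the header of your own export:

import pandas as pd

# Hypothetical exported frame; VeloView's CSV column names may differ by version.
df = pd.read_csv("frame_0000.csv")

# Crop to a box around the sensor and drop low-intensity returns (thresholds are examples).
cropped = df[
    df["Points:0"].between(-20, 20) &
    df["Points:1"].between(-20, 20) &
    df["Points:2"].between(-2, 5) &
    (df["intensity"] > 10)
]
cropped.to_csv("frame_0000_cropped.csv", index=False)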
I am working on a project to detect FOD (Foreign Object Debris) found on runways. FOD includes anything like nuts, bolts, screws, locking wire, plastic debris, stones, etc. that has the potential to cause damage to an aircraft. I have searched the Internet for an image dataset, but no dataset related to FOD is available. My question is: how can I make my own dataset of images that can then be used for training?
Kindly guide me in making an image dataset for both classification and detection purposes, and also on the data pre-processing that will be required. Thanks, and waiting for your reply!
Although the question is a bit vague regarding your requirements and the specs of your machine, I'll try to answer it. You'll need object detection for this task. There are many models available which you can use, like YOLO, SSD, etc.
To create your own dataset, you can follow these steps:
Take lots of images of your objects of interest in various conditions, viewpoints and backgrounds (around 2,000 per class should be good enough).
Now annotate (or mark) where your object is in each image. If you're using YOLO, make use of Yolo-mark for annotating; there are similar tools for SSD and other models. (See the sketch after these steps for the label format YOLO expects.)
Now you can begin training.
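For reference, a YOLO-style annotation is one plain-text .txt file per image, with one line per object: the class index followed by the box centre and size, all normalised to the image dimensions. A tiny illustrative helper (the function name and box convention are mine, not part of Yolo-mark):

# Illustrative only: convert a pixel-space box (x_min, y_min, x_max, y_max)
# into a YOLO label line "class x_center y_center width height" (all normalised to 0-1).
def to_yolo_line(class_id, box, img_w, img_h):
    x_min, y_min, x_max, y_max = box
    x_c = (x_min + x_max) / 2.0 / img_w
    y_c = (y_min + y_max) / 2.0 / img_h
    w = (x_max - x_min) / float(img_w)
    h = (y_max - y_min) / float(img_h)
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# e.g. a bolt occupying pixels (100, 150)-(180, 210) in a 1280x720 image:
print(to_yolo_line(0, (100, 150, 180, 210), 1280, 720))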
These steps should get you started or at least point you in the right direction.
You can build your own classification dataset with this code. I wrote it, and it works correctly.
The imports are included below; you just need to set DATADIR (and the CATEGORIES and IMG_SIZE placeholders) to match your data.
import os
import pickle
import cv2

DATADIR = "path/to/your/dataset"      # one sub-folder per class
CATEGORIES = ["class_a", "class_b"]   # folder names are the class labels
IMG_SIZE = 100                        # images are resized to IMG_SIZE x IMG_SIZE

training_data = []
x_train = []
y_train = []

if __name__ == "__main__":
    for category in CATEGORIES:
        path = os.path.join(DATADIR, category)
        class_num = CATEGORIES.index(category)
        for img in os.listdir(path):
            try:
                img_array = cv2.imread(os.path.join(path, img))
                new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
                training_data.append([new_array, class_num])
            except Exception:
                pass                  # skip unreadable images

    for features, label in training_data:
        x_train.append(features)
        y_train.append(label)

    # create pickle files
    pickle_out = open("x_train.pickle", "wb")
    pickle.dump(x_train, pickle_out)
    pickle_out.close()

    pickle_out = open("y_train.pickle", "wb")
    pickle.dump(y_train, pickle_out)
    pickle_out.close()
If you're starting completely from scratch, you can use "Dataset Directory", available on the Play Store. The app helps you create custom datasets using your mobile phone. You'll have to sign in to your Google Drive so that your dataset is stored in Drive rather than on your phone. It also supports labelling the data for classification and regression predictive models.
Currently, the app supports binary image classification and image regression.
Hope this helps!
Download link: https://play.google.com/store/apps/details?id=com.applaud.datasetdirectory
I want to use NiftyNet to implement deep learning for medical image processing. However, there is one thing I haven't figured out regarding the data input: how does it join the multi-modality images? I saw the BRATS2017 demo; they seem to use 4 different modalities, and in the configuration file they just include the directory of the images and claim it will "concatenate" the images. But I want to know more: as those images are 3D, how are they concatenated? [slice1-30]:[slice1-30] ... or [slice1, slice1, slice1 ...]:[slice2, slice2, slice2 ...]?
And can we control the data organization part? If so, which file should I modify?
Any suggestion would be greatly appreciated!
In this case, the 3D images are concatenated in an additional dimension. You control the order they're concatenated in by specifying the order of files to load in the *.ini files.
However, as long as you're consistent, it shouldn't matter what order the modalities go in.
The images are concatenated in the channel dimension. For 2D images, the dimensions are NSSC: batch size, 2 spatial dimensions, then channel. For 3D images, the dimensions are NSSSC: batch size, 3 spatial dimensions, then channel.
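To make the channel dimension concrete, here is a small NumPy sketch of the idea (illustrative only, not NiftyNet's actual loader): four co-registered 3D modalities, each of shape (X, Y, Z), are stacked along a new last axis to give (X, Y, Z, 4), and the batch dimension is added in front to give the NSSSC layout.

import numpy as np

# Four hypothetical co-registered 3D modalities (e.g. T1, T1c, T2, FLAIR), each (X, Y, Z).
t1, t1c, t2, flair = (np.random.rand(240, 240, 155) for _ in range(4))

# Concatenate along a new channel axis -> (X, Y, Z, C)
volume = np.stack([t1, t1c, t2, flair], axis=-1)
print(volume.shape)           # (240, 240, 155, 4)

# With a batch dimension in front you get the NSSSC layout mentioned above.
batch = volume[np.newaxis, ...]
print(batch.shape)            # (1, 240, 240, 155, 4)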
I've been attempting to export boundary information from an OSM file. My process is nearly there; however, the polygon I'm generating has an issue with random lines being drawn.
I would appreciate some insight into where I may be going wrong.
Step 1: Export the OSM data into XML
osmfilter -v greater-london-latest.osm --keep="boundary= admin_level= place=" > b.txt
Step 2: Run a script to process the XML.
cycle each relation node
load the member ways
load the nodes from each specified way
record the lat/lon and build a poly set
This produces a series of lat/lon points which, when I build them into a polygon, give the correct overall shape I'm looking for. However, I assume there is an issue with the connecting lines.
My polygon output
I'm actually looking for this, which is similar, but I'm obviously missing something.
Actual polygon I'm looking to generate
Again, thanks for any help.
Ways in relations are not necessarily sorted. See answers to this question on how to sort ways, especially the answer by user geocodezip.
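If you want to do the sorting yourself, the usual trick is to chain the member ways end to end by matching their shared endpoint nodes, reversing a way when it points the wrong direction. Here is a rough Python sketch of that idea (the input layout, a list of coordinate lists per way, is my own assumption):

def stitch_ways(ways):
    """Chain unordered ways into one ring by matching shared endpoints."""
    ring = list(ways[0])
    remaining = [list(w) for w in ways[1:]]
    while remaining:
        for i, way in enumerate(remaining):
            if way[0] == ring[-1]:          # way starts where the ring currently ends
                ring.extend(way[1:])
            elif way[-1] == ring[-1]:       # way points the other direction: reverse it
                ring.extend(reversed(way[:-1]))
            else:
                continue
            del remaining[i]
            break
        else:                               # nothing connects: the relation is broken
            raise ValueError("ways do not form a single connected ring")
    return ring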
Alternatively, you can make use of various tools/libraries to do the sorting for you. Unfortunately I can't point you directly to one, but there are various tools capable of sorting relation members, including the OSM website itself, JOSM, overpass turbo (I guess), some JS libraries, [...].
Maybe some other user can help out with pointing to some good examples?
Does anybody know whether it is possible to get some kind of model file from the doctor when they do a 3D ultrasound of a pregnant woman? I mean something like a DICOM (.dcm) file or an .stl file, or something similar that I can then work with and finally print on a 3D printer.
Thanks a lot.
A quick search for "dicom 3d ultrasound sample" turned up one that you might be able to use for internal testing. You can get the file from here.
Hello,
The first problem you will face is the file format.
Because of the way the images are generated, 3D ultrasound data have voxels that are expressed in a spherical coordinate system. DICOM (as it stands now) only supports voxels in a Cartesian system.
So the manufacturers have a few choices:
They can save the data in a proprietary format (e.g. Kretzfile for GE, MVL for Samsung).
They can save the data in private tags inside a DICOM file (GE, Hitachi, Philips).
They can re-format the voxels to be Cartesian, but then the data has been transformed, and nobody likes that. And anyway, since they also need to save the original (untransformed) data, the companies that do offer Cartesian voxels usually save them in the same way as the original, so they are not saved in normal DICOM tags but in their proprietary version. (The conversion itself is sketched below.)
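For the curious, that re-formatting boils down to the standard spherical-to-Cartesian conversion applied per sample, followed by resampling onto a regular grid. A rough sketch of the coordinate mapping only (not any vendor's actual code, which also handles probe geometry and scaling):

import numpy as np

def spherical_to_cartesian(r, theta, phi):
    """Map a sample at radius r, elevation theta, azimuth phi to (x, y, z)."""
    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = r * np.cos(theta)
    return x, y, z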
This mix of proprietary formats is why most of the standard software that can do 3D from CT or MR will not be able to cope with these data files.
The second problem is the noise. Ultrasound datasets are inherently very noisy! Again, standard 3D reconstruction software was designed for CT or MR and has problems with this.
I do have a product that will read most 3D ultrasound files and create an STL model directly from the datasets (spherical or Cartesian). It is called baby SliceO (http://www.tomovision.com/products/baby_sliceo.html).
Unfortunately, it is not free, but you can try it without a license. Give it a try and let me know if you like it...
Yves
In this Mapbox blog post, Lauren Budorick shares how they got a routing engine working with OSRM that uses elevation data in order to give cyclists better routes... AMAZING!
I also want to explore the potential of OSRM's routing when plugging in external (user-generated) data, but I'm still having a hard time grasping how OSRM's profiles work. I think I get the main idea: every way (or node?) is piped into a few functions that, all together, score how good that path is.
But that's it; there are plenty of missing pieces in my head, like what each of the functions Lauren uses in her profile does. If anyone could point me to some more detailed information on how all of this works, you'd make my next week much, much easier :)
Also, in Lauren's post, inside source_function she loads a ./srtm_bayarea.asc file. What does that .asc file look like? How would one generate a file like that from, let's say, data stored in a PostgreSQL database? Can we use some other format, like GeoJSON?
Then, when in segment_function she uses things like source.lon and target.lat, do those refer to the raw data stored in the .asc file? Or is that file processed into some standard representation that everything is mapped onto?
As you can see, I'm a complete newbie to routing, and maybe GIS in general, but I'd love to learn more about the standards and tools that circle around the OSRM ecosystem. Can you share some tips with me?
I think I get the main idea, that every way (or node?) is piped into a few functions that, all together, score how good that path is.
Right, every way and every node are scored as they are read from an OSM dump to determine the passability of a node and the speed of a way (used as the scoring heuristic).
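To make that a bit more concrete: OSRM profiles are written in Lua, so the snippet below is only a conceptual Python-style illustration of the idea (the names and numbers are invented; this is not OSRM's API). A per-way hook typically starts from a base speed for the highway type and then scales it with whatever extra data you plug in, such as an uphill gradient penalty:

# Conceptual illustration only -- not OSRM's actual (Lua) profile API.
BASE_SPEEDS = {"cycleway": 18, "residential": 16, "primary": 14}   # km/h, made-up values

def way_speed(highway_type, gradient):
    """Scale a base speed by a crude gradient penalty/bonus."""
    speed = BASE_SPEEDS.get(highway_type, 10)
    if gradient > 0:                      # uphill: slow down
        speed *= max(0.2, 1.0 - 5.0 * gradient)
    else:                                 # downhill: modest speed-up
        speed *= min(1.3, 1.0 - 2.0 * gradient)
    return speed

print(way_speed("residential", 0.04))    # a 4% uphill grade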
A basic description of the data format can be found here. As it says there, SRTM data is immediately available as ArcInfo ASCII grids, and currently plaintext ASCII grids are the only supported format. There are several great Python tools for GIS developers that may help in converting other data types to ASCII grids; check out rasterio, for example. Here's an example of a really simple Python script to convert NED IMGs to ASCII grids:
import sys
import rasterio as rio
import numpy as np

args = sys.argv[1:]

# rio.drivers() is the older rasterio API; newer releases use rasterio.Env() instead.
with rio.drivers():
    with rio.open(args[0]) as src:
        elev = src.read()[0]        # first band as a 2D array
        profile = src.profile

def shortify(x):
    if x == profile['nodata']:
        return -9999                # conventional nodata value for ASCII grids
    elif x == np.finfo(x).tiny:
        return 0
    else:
        return int(round(x))

# Nested list comprehension instead of map() so the values materialise under Python 3 too.
out_elev = [[shortify(v) for v in row] for row in elev]

with open(args[0] + '.asc', 'a') as dst:
    np.savetxt(dst, np.array(out_elev), fmt="%s", delimiter=" ")
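To answer "what does that .asc file look like": an ArcInfo/ESRI ASCII grid is just a small plain-text header followed by rows of cell values, something like this (the numbers here are made up):

ncols         4
nrows         3
xllcorner     -122.500000
yllcorner     37.500000
cellsize      0.000833
NODATA_value  -9999
12 14 15 17
11 13 16 18
-9999 12 14 15

GDAL's "AAIGrid" driver writes this format, which may help if your data currently lives in a PostgreSQL/PostGIS database and you want to export it as a raster.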
Regarding source.lon and target.lat: source and target are nodes provided as arguments by the extraction process. Their coordinates are used to look up data at each location during extraction.
Make sure to read thoroughly through the relevant wiki page (already linked). Alternatively, feel free to open a GitHub issue at https://github.com/Project-OSRM/osrm-backend/issues with OSRM questions.