Convert point cloud from pointcloud2 (rosbag) to bin (KITTI) - ros

How can I convert a point cloud saved in rosbag, in format sensor_msgs/PointCloud2, to .bin files in KITTI format?
I know that it is possible to convert to .pcd (http://wiki.ros.org/pcl_ros#pointcloud_to_pcd) so perhaps even a pcd to bin converter would be enough.
Is there any available tool to do this?
I've found this, but it needs ROS Kinetic (a legacy ROS version).

A Python script to do it:
import numpy as np
import pypcd

# msg is a sensor_msgs/PointCloud2 message read from the bag
pc = pypcd.PointCloud.from_msg(msg)
x = pc.pc_data['x']
y = pc.pc_data['y']
z = pc.pc_data['z']
intensity = pc.pc_data['intensity']

# Interleave the fields as x, y, z, intensity per point (the KITTI .bin layout)
arr = np.zeros(x.shape[0] + y.shape[0] + z.shape[0] + intensity.shape[0], dtype=np.float32)
arr[::4] = x
arr[1::4] = y
arr[2::4] = z
arr[3::4] = intensity
arr.astype('float32').tofile('filename.bin')
Where x, y, z and intensity are arrays for a single point cloud. It's not strictly needed to use pypcd. (Source)
Also, this conversion tool can actually be used without ROS, using another tool for the conversion to the .pcd file first.
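If pypcd is not available, here is a rough sketch of the same conversion that reads the PointCloud2 messages straight from the bag with the standard rosbag and sensor_msgs Python APIs (the topic name /velodyne_points and the file naming are assumptions to adapt):
import numpy as np
import rosbag
from sensor_msgs import point_cloud2

bag = rosbag.Bag('input.bag')
for i, (topic, msg, t) in enumerate(bag.read_messages(topics=['/velodyne_points'])):
    # Read x, y, z, intensity for every point, skipping NaNs
    points = point_cloud2.read_points(msg, field_names=('x', 'y', 'z', 'intensity'), skip_nans=True)
    arr = np.array(list(points), dtype=np.float32)  # shape (N, 4)
    # A KITTI .bin file is just these float32 values written point by point
    arr.tofile('%06d.bin' % i)
bag.close()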

Related

Texture transformation

I am working on eigen transformation - texture to detect objects in an image. This work was published in ACCV 2006, page 71; the full PDF is available as chapter 3 of https://www.diva-portal.org/smash/get/diva2:275069/FULLTEXT01.pdf. I am not able to follow what happens after getting the texture descriptors.
I am working on the attached image. The image size is 954×1440.
I took image patches of 32×32 and for every patch calculated the eigenvalues to get the texture descriptor. What to do with these texture descriptors after that is what I am not able to follow.
Any help to unblock this will be really appreciated. The code for calculating the descriptors looks like below:
w = 32
descriptors = np.zeros((gray.shape[0]//w, gray.shape[1]//w))
for i in range(gray.shape[0]//w):
    for j in range(gray.shape[1]//w):
        # Eigenvalues of the 32x32 patch, sorted in descending order
        sorted_eigen = -np.sort(-np.linalg.eigvals(gray[i*w:(i+1)*w, j*w:(j+1)*w]))
        # Average the magnitudes of the sorted eigenvalues from index l to k
        l = 13
        k = w
        theta_svd = (1/(k - l + 1)) * np.sum([np.abs(val) for val in sorted_eigen[l:k]])
        descriptors[i, j] = theta_svd

Writing Dask/XArray to NetCDF - Parallel IO

I am using Dask/xarray with a ~150 GB dataset on a distributed cluster on an HPC system. I have the computation component complete, and it takes about 30 minutes. I want to save the final result to a NetCDF4 file, but writing the data to a NetCDF file is quite slow (~3 hrs) and does not seem to run in parallel. It is unclear to me whether the "to_netcdf" function in xarray is supposed to support parallel writes. Currently my approach is to write an empty NetCDF file with netCDF4 and then append the data from the xarray object:
f_mosaic = 't1.nc'
meta = {'width': dat_f.shape[1],
        'height': dat_f.shape[2],
        'crs': rasterio.crs.CRS(init='epsg:' + fi['CPER']['Reflectance']['Metadata']['Coordinate_System']['EPSG Code'].value.decode("utf-8")),
        'transform': aff_final,
        'count': dat_f.shape[0]}
with netCDF4.Dataset(f_mosaic, mode='w', format="NETCDF4") as t1:
    # Create spatial dimensions
    y = t1.createDimension('y', meta['width'])
    x = t1.createDimension('x', meta['height'])
    wl_dim = t1.createDimension('wl', meta['count'])
    reflectance = t1.createVariable("reflectance", "int16", ("wl", "y", "x",), fill_value=null_val, zlib=True)
    reflectance.setncattr('grid_mapping', 'crs')
    crs = t1.createVariable('crs', 'c')
    crs.spatial_ref = meta['crs'].wkt
    crs.epsg_code = meta['crs'].to_string()
    crs.GeoTransform = " ".join(str(x) for x in meta['transform'].to_gdal())
dat_f.to_netcdf(path=f_mosaic, mode='a', format='NETCDF4', encoding={'reflectance': {'zlib': True}})
Overall, the question is: how can I write this data to a NetCDF4 file quickly? Does Dask/xarray support parallel writes to NetCDF4? If so, what am I doing incorrectly?
Thanks!
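For context, here is a minimal sketch of deferring the write so that dask's scheduler handles it, assuming this xarray version accepts compute=False in to_netcdf; the chunk sizes and file names are placeholders, and whether the chunks are actually written in parallel still depends on the backend and the HDF5 build:
import xarray as xr
from dask.diagnostics import ProgressBar

# Hypothetical chunked dataset standing in for dat_f
dat_f = xr.open_dataset('reflectance_stack.nc', chunks={'wl': 32})

# Build the write as a delayed task graph instead of writing eagerly
delayed_write = dat_f.to_netcdf('t1.nc', engine='netcdf4',
                                encoding={'reflectance': {'zlib': True}},
                                compute=False)

# Trigger the graph; progress is reported by the local dask scheduler
with ProgressBar():
    delayed_write.compute()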

OpenCV : how to provide matrix for 'undistort' if I know lens correction factor in Gimp?

I'm getting images from an IP camera that have a strong fish-eye effect. I found that in Gimp I can get lines mostly straight by applying the Lens Distortion filter with a "main" value of -30 (all other parameters remain zero).
Now I need to do this ad-hoc using OpenCV. I gathered that the undistort function in imgproc would be the right thing to call. But how do I generate the correct camera and distortion matrix? I see there is a calibrateCamera function, but it seems you need a PhD in computer vision to use it. I have no clue. Since I know the one parameter, surely there must be a simple way to translate it into the matrices expected by 'undistort'?
Note: I only need the radial distortion coefficients, I'm not interested in the tangential distortion.
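For reference, here is a minimal sketch (in Python, for brevity) of the two inputs undistort expects: a 3×3 camera matrix with a guessed focal length and the image centre as principal point, and a coefficient vector where only the first radial term k1 is set. The concrete numbers are assumptions to tune by eye, not a direct translation of Gimp's -30 value.
import cv2
import numpy as np

img = cv2.imread('frame.jpg')
h, w = img.shape[:2]

# Guessed pinhole parameters: focal length in pixels, principal point at the image centre
f = 0.8 * w  # assumption, adjust until straight lines look straight
camera_matrix = np.array([[f, 0, w / 2],
                          [0, f, h / 2],
                          [0, 0, 1]], dtype=np.float64)

# Distortion coefficients (k1, k2, p1, p2, k3); only radial k1 is used here
dist_coeffs = np.array([-0.3, 0.0, 0.0, 0.0, 0.0], dtype=np.float64)  # assumption

undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)
cv2.imwrite('frame_undistorted.jpg', undistorted)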
There is a sample provided by OpenCV for calibration. For that, all you need is a list of images of a checkerboard (around 20 should be good), taken by your desired camera. It will give you all the required parameters (distortion coefficients, intrinsic parameters, etc.). Then you can use OpenCV's 'undistort' function to correct your image.
You need to change in default.xml (or you can create your own .xml) the name of the XML file containing the paths of your images, the count of inner squares and their dimension in the real world.
Ta-da, you have your required parameters :-)
For those who wonder where that calibration tool comes from: it seems one has to build it from source. This is what I did on Linux:
git clone https://github.com/opencv/opencv.git
cd opencv
git checkout -b 3.1.0 3.1.0 # make sure we build that version
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=Release -D BUILD_EXAMPLES=ON ..
make -j4
Then to calibrate:
./bin/cpp-example-calibration -w=8 -h=6 -o=camera.yml -op -oe -su image_list.xml
The -su option lets you verify how the images look after un-distortion. The -w and -h parameters take the number of "inner corners", which is not the number of squares in the chessboard pattern but the number of squares per row and per column minus one.
Here's how the undistortion is applied in the end, using Scala and JavaCV:
import org.bytedeco.javacpp.indexer.FloatRawIndexer
import org.bytedeco.javacpp.opencv_core.Mat
import org.bytedeco.javacpp.{opencv_core, opencv_imgcodecs, opencv_imgproc}
import java.io.File

// from the camera_matrix > data part of the yml:
val cameraFocal = 1.4656877976320607e+03
val cameraCX    = 1920.0/2
val cameraCY    = 1080.0/2
val cameraMatrixData = Array[Double](
  cameraFocal, 0.0        , cameraCX,
  0.0        , cameraFocal, cameraCY,
  0.0        , 0.0        , 1.0
)

// from the distortion_coefficients of the yml:
val distMatrixData = Array[Double](
  -4.016824381742e-01, 4.368842493074e-02, 0.0, 0.0, 1.096412142704e-01
)

def run(in: File, out: File): Unit = {
  val matOut = new Mat
  val camMat = new Mat(3, 3, opencv_core.CV_32FC1)
  val camIdx = camMat.createIndexer[FloatRawIndexer]
  for (row <- 0 until 3) {
    for (col <- 0 until 3) {
      camIdx.put(row, col, cameraMatrixData(row * 3 + col).toFloat)
    }
  }
  val distVec = new Mat(1, 5, opencv_core.CV_32FC1)
  val distIdx = distVec.createIndexer[FloatRawIndexer]
  for (col <- 0 until 5) {
    distIdx.put(0, col, distMatrixData(col).toFloat)
  }
  val matIn = opencv_imgcodecs.imread(in.getPath)
  opencv_imgproc.undistort(matIn, matOut, camMat, distVec)
  opencv_imgcodecs.imwrite(out.getPath, matOut)
}

Invalid indexing operation error when trying to draw epipolar lines

I'm creating a stereo image processing project modeled on MATLAB's examples. Code copy-pasted from one of them doesn't work well.
I1 = rgb2gray(imread('viprectification_deskLeft.png'));
I2 = rgb2gray(imread('viprectification_deskRight.png'));
points1 = detectHarrisFeatures(I1);
points2 = detectHarrisFeatures(I2);
[features1, valid_points1] = extractFeatures(I1, points1);
[features2, valid_points2] = extractFeatures(I2, points2);
indexPairs = matchFeatures(features1, features2);
matchedPoints1 = valid_points1(indexPairs(:, 1),:);
matchedPoints2 = valid_points2(indexPairs(:, 2),:);
figure; showMatchedFeatures(I1, I2, matchedPoints1, matchedPoints2);
load stereoPointPairs
[fLMedS, inliers] = estimateFundamentalMatrix(matchedPoints1,matchedPoints2,'NumTrials',4000);
figure;
subplot(121); imshow(I1);
title('Inliers and Epipolar Lines in First Image'); hold on;
plot(matchedPoints1(inliers,1), matchedPoints1(inliers,2), 'go');
An error:
Error using epilineTest (line 24) Invalid indexing operation.
Best regards
Looks like you have an older version of MATLAB. Try doing this:
[fLMedS, inliers] = estimateFundamentalMatrix(...
matchedPoints1.Location, matchedPoints2.Location,'NumTrials',4000);
Generally, look at the example in your own local MATLAB documentation, rather than the one on the website. The website has the doc for the latest release (currently R2014a), and the examples may be using new features that do not exist in the older versions.

Blockproc in MATLAB with two output variables

I have the following problem. I have to compute dense SIFT interest points in a very high-resolution image (182 MP). When I run the code on the full image, MATLAB always closes suddenly, so I decided to run the code on image patches.
the code
I tried to use blockproc in MATLAB to call the C++ function that performs the dense SIFT interest point detection this way:
fun = @(block_struct) denseSIFT(block_struct.data, options);
[dsift, infodsift] = blockproc(ndvi, [1000 1000], fun);
where dsift contains the SIFT descriptors (vectors) and infodsift holds the information about the interest points, such as the x and y coordinates.
the problem
The problem is that blockproc allows just one output, but I want both outputs. The following error is given by MATLAB when I run the code:
Error using blockproc
Too many output arguments.
Is there a way for me to do this?
Would it be a problem for you to "hard code" a version of blockproc?
Assuming for a moment that you can divide your image into NxM smaller images, you could loop around as follows:
bigImage = someFunction();
sz = size(bigImage);
smallSize = sz ./ [N M];

dsift = cell(N, M);
infodsift = cell(N, M);

for ii = 1:N
    for jj = 1:M
        smallImage = bigImage((ii-1)*smallSize(1) + (1:smallSize(1)), (jj-1)*smallSize(2) + (1:smallSize(2)));
        [dsift{ii,jj}, infodsift{ii,jj}] = denseSIFT(smallImage, options);
    end
end
The results will then be in the two cell arrays. No real need to pre-allocate, but it's tidier if you do. If the individual matrices are the same size, you can convert into a single large matrix with
dsiftFull = cell2mat(dsift);
Almost magic. This won't work if your matrices are different sizes - but then, if they are, I'm not sure you would even want to put them all in a single one (unless you decide to horzcat them).
If you do decide you want a list of "all the columns as a giant matrix", then you can do
giantMatrix = [dsift{:}];
This will return a matrix with (in your example) 128 rows, and as many columns as there were "interest points" found. It's shorthand for
giantMatrix = [dsift{1,1} dsift{2,1} dsift{3,1} ... dsift{N,M}];
