How to write custom calibration parameters to Intel RealSense D435 camera - OpenCV

I'm working with an Intel RealSense D435, and my plan is to overlay an image on top of the point cloud using the opencv_pointcloud_viewer.py example in the Python wrapper.
I've calibrated the camera with a checkerboard, and there is a clear difference between the intrinsic parameters produced by that calibration and the defaults returned by depth_profile.get_intrinsics().
I was wondering whether there is a way to change the intrinsic parameters of the depth_frame in my Python code. That is, I want to replace the default intrinsics with the values I found myself, so that the next time I call depth_profile.get_intrinsics() I get the same values as the ones from the checkerboard calibration.
Thanks for your help in advance.
I have seen https://community.intel.com/t5/Items-with-no-label/How-to-save-load-intrinsics-to-from-file-python/td-p/435336 but that doesn't seem to solve my problem.

I found that the Intel RealSense Dynamic Calibrator tool can be used for this purpose. It lets you define a custom XML file containing the intrinsic parameters for all sensors and write them to the device.
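Until the new values are written to the device, one workaround is to carry your own calibration alongside the SDK defaults in your application code. A minimal sketch (pure Python, with hypothetical calibration numbers; the field names mirror pyrealsense2's rs.intrinsics) of saving checkerboard-derived intrinsics to a file and loading them back:

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class Intrinsics:
    # Field names mirror pyrealsense2's rs.intrinsics
    width: int
    height: int
    fx: float
    fy: float
    ppx: float
    ppy: float
    coeffs: list = field(default_factory=list)  # distortion coefficients

def save_intrinsics(intr, path):
    with open(path, "w") as f:
        json.dump(asdict(intr), f, indent=2)

def load_intrinsics(path):
    with open(path) as f:
        return Intrinsics(**json.load(f))

# Hypothetical values from a checkerboard calibration
calib = Intrinsics(1280, 720, 920.3, 921.1, 639.5, 359.5, [0.0] * 5)
save_intrinsics(calib, "depth_intrinsics.json")
loaded = load_intrinsics("depth_intrinsics.json")
```

You would then use `loaded` wherever your code currently consumes the result of depth_profile.get_intrinsics().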

Related

OpenCV fisheye undistort without checkerboard approach

I am looking for a way to determine the camera calibration matrix and distortion coefficients so that I can undistort a fisheye image without the checkerboard approach in OpenCV. My current setup doesn't allow me to take sample pictures with the camera, so I need an alternative approach, preferably using OpenCV (Python). Thanks in advance.
Recently there have been some AI approaches that work from a single image, e.g. https://github.com/alexvbogdan/DeepCalib.
But I have never tried it and don't know whether it works.

Finding Intrinsic parameters of a Digital Camera

What is the best way to find the intrinsic parameters of a Digital Camera?
I have tried the OpenCV camera calibration tool, but it gives me different values after every calibration, so I'm not sure whether they are correct. The same method works perfectly fine for a USB camera.
I am not sure whether the process for getting the intrinsic parameters of a digital camera is slightly different, or whether I'm missing something.
If someone has any clue regarding this, please help!
Changes in ISO and white balance happen "behind" the lens+sensor combo, so they cannot directly affect the calibrated geometry model of the camera.
However, they may affect the results of a calibration procedure by biasing the locations of the detected features on whatever rig you are using ("checkerboard" or similar) - anything that affects the appearance of grayscale gradients can potentially do that.
The general rule is: "Find the lens and exposure settings that work for your application (not the calibration), then lock them down and disable any and all auto-gizmo features that may modify them". If those settings don't work for the particular calibration setup you are using, change it (for example, dim the lights or switch the target pattern to gray/black from white/black if the application lighting causes the target to be overexposed/saturated).

How to do character matching in OpenCV

I am trying to develop a character-matching application that takes an image from a camera and matches it against a provided image template. So far I have tried matchShapes on contours, which works fine for simple shapes but not for characters. I also tried matchTemplate, but that fails as soon as the size, font, or rotation of the character in the camera image differs from the template.
I am now thinking I need to segment the camera image, extract features from each segment, and compare those feature sets against feature sets of the reference images. Can anyone give me a starting direction or suggestion?
For example, this is an image from the camera, and I need to find a template image in it.
I must stress that I am no expert at optical character recognition so please do a thorough research on your end as well. Following are two links that might help you achieve your goal using character feature sets:
http://blog.damiles.com/2008/11/basic-ocr-in-opencv/
http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_ml/py_knn/py_knn_opencv/py_knn_opencv.html

Change value of the camera focal in pixels

I am currently looking for a proper solution to the following problem, which is not directly programming-oriented, but I am guessing that OpenCV users might have an idea:
My stereo camera has a 1/3.2" sensor with 752x480 resolution. I use the two stereo images from this camera to create a point cloud with the Point Cloud Library (PCL).
I would like to reduce the number of points in the cloud by directly lowering the resolution of the input images (from 752x480 to 376x240).
As the title indicates, I have to adapt the camera's focal length in pixels accordingly. I compute this parameter with the following formula:
float focal_pixel = (FOCAL_METERS / SENSOR_WIDTH_METERS)*InputImg.cols;
However, SENSOR_WIDTH_METERS is currently a constant corresponding to the 1/3.2" figure converted to meters, and I would like to adapt the computation to the resolution I actually want: 376x240.
I am absolutely not sure whether I have stated my problem clearly enough to be answered, which would mean I am going in the wrong direction.
Thank you in advance
edit: the function used to process the stereo image (after computing):
getPointCloud(hori_c_pp, vert_c_pp, focal_pixel, BASELINE_METERS, out_stereo_cloud, ref_texture);
where the first two parameters are the coordinates of the image center, BASELINE_METERS is my camera's baseline, out_stereo_cloud is my output cloud, and ref_texture is the color information. This function is taken from the stereo_matching sub-library.
For some reason, if I just resize the stereo images, the result seems to conflict with the focal_pixel parameter, since the dimensions are no longer the same.
I'm very lost on this issue.
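The scaling itself follows directly from the formula in the question: FOCAL_METERS and SENSOR_WIDTH_METERS are physical constants, so only InputImg.cols changes, and halving the resolution halves the focal length in pixels (and likewise the principal point coordinates). A sketch with hypothetical numbers (a 4 mm lens; ~4.54 mm assumed for the active width of a 1/3.2" sensor):

```python
# Hypothetical numbers for illustration: 4 mm lens, 1/3.2" sensor
FOCAL_METERS = 0.004
SENSOR_WIDTH_METERS = 0.00454  # assumed active width of a 1/3.2" sensor

def focal_pixels(image_width_px):
    # Same formula as in the question: f_px = (f_m / w_m) * cols
    return (FOCAL_METERS / SENSOR_WIDTH_METERS) * image_width_px

f_full = focal_pixels(752)  # full resolution
f_half = focal_pixels(376)  # halved resolution: exactly f_full / 2
```

So rather than changing SENSOR_WIDTH_METERS, pass the width of the resized image as `cols`; the sensor does not shrink when you downsample, only the pixel grid laid over it gets coarser.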
As I don't really follow the formulas and method calls you're posting I advise you to use another approach.
OpenCV already gives you the possibility to compute a 3D point for every pixel of a stereo pair with the method cv::reprojectImageTo3D. Another question also already discusses the conversion to the corresponding PCL datatype.
If you only want to reproject a certain ROI of your image you should opt for cv::perspectiveTransform as is explained in the documentation I pointed out in the first link.

How do I detect small blobs using EmguCV?

I'm trying to track the position of a robot from an overhead webcam. However, since I don't have much access to the robot or the environment, I have been working with snapshots from the webcam.
The robot has 5 bright LEDs positioned strategically which are a different enough color from the robot and the environment so as to easily isolate.
I have been able to do just that using EmguCV, resulting in a binary image like the one below. My question is now: how do I get the positions of the five blobs and use those positions to determine the position and orientation of the robot?
I have been experimenting with the Emgu.CV.VideoSurveillance.BlobTrackerAuto class, but it stubbornly refuses to detect the blobs in the above image. Being a bit of a newbie when it comes to any of this, I'm not sure what I could be doing wrong.
So what would be the best method of obtaining the positions of the blobs in the above image?
I can't tell you how to do it with EmguCV in particular; you'd need to translate the calls from OpenCV to EmguCV. You'd use cv::findContours to get the blobs and cv::moments to get their positions (the formula for the centroid of a blob is in the documentation of cv::moments). Then you'd use cv::estimateRigidTransform to get the position and orientation of the robot.
I use the cvBlob library to work with blobs. I recently used it to detect small blobs and it works fine.
I wrote a python module to do this very thing.
http://letsmakerobots.com/node/38883#comments
