I'm having some trouble with Hector SLAM mapping (cyglidar_d1) - ROS

I really want to solve this problem.
This is my environment: Jetson Nano, Ubuntu 18.04, ROS Melodic, IMU: MPU6050, lidar: cyglidar_d1. I got this error:
transform from map to base_link failed
In the cyglidar_d1 launch file, I will only use 2D, so I modified run_mode to 0. [1]
In the hector slam mapping_default file [2]: base_frame -> base_link, odom_frame -> base_link
<node pkg="tf" type="static_transform_publisher" name="base_to_laser_broadcaster" args="0 0 0 0 0 0 base_link laser_link 100"/>
<include file="$(find hector_imu_attitude_to_tf)/launch/example.launch"/>
Following the hector slam tutorial, sim time = false.
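For reference, a minimal hector_mapping include consistent with the settings above might look like this (a sketch, not the asker's actual file; the arg names come from hector_slam's mapping_default.launch, and the scan_topic value is an assumption based on the scan_laser topic mentioned below):

<include file="$(find hector_mapping)/launch/mapping_default.launch">
  <!-- track base_link directly; no separate odometry frame is available -->
  <arg name="base_frame" value="base_link"/>
  <arg name="odom_frame" value="base_link"/>
  <!-- assumption: the cyglidar_d1 publishes its scan on scan_laser -->
  <arg name="scan_topic" value="scan_laser"/>
</include>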
tf rqt_tree images: [3], with fixed frame = map [4] and fixed frame = base_link [5]
rqt_graph: [6]
I can see the lidar scan_laser and the IMU data on the rostopic list. I want to use navigation after mapping. I'd appreciate it if you could tell me which part I should edit here. Thank you.
[1]: https://i.stack.imgur.com/cailz.png
[2]: https://i.stack.imgur.com/xB9nb.png
[3]: https://i.stack.imgur.com/0aHSM.png
[4]: https://i.stack.imgur.com/AOyNn.png
[5]: https://i.stack.imgur.com/zX8i2.png
[6]: https://i.stack.imgur.com/eeWXU.png


Correct tf frame settings in ndt_matching

ndt_matching succeeded in Autoware, but the vehicle model cannot be set correctly.
How do I set the correct angle for the vehicle model?
What does the frame "mobility" mean?
tf.launch
<node pkg="tf" type="static_transform_publisher" name="world_to_map" args="0 0 0 0 0 0 /world /map 10" />
<node pkg="tf" type="static_transform_publisher" name="map_to_points_map" args="0 0 0 0 0 0 /map /points_map 10" />
<node pkg="tf" type="static_transform_publisher" name="velodyne_to_lidar_top" args="0 0 0 0 0 0 /velodyne /lidar_top 10" />
Image for RViz
Image for TF Tree
The settings in the TF file were correct.
To change the angle of the vehicle model, I made the following settings:
1) Change the yaw setting of Baselink to Localizer in the Setup tab (to the direction you want the vehicle model to point).
2) Set the yaw setting of ndt_matching to offset it (if the baselink yaw from 1) is -1.55, here it is +1.55).
I wrote an article about these issues. Thank you JWCS!
https://medium.com/yodayoda/localization-with-autoware-3e745f1dfe5d

Change video stream resolution in YoloV4 demo

Here's what is shown when loading the live stream demo for YoloV4:
Webcam index: 2
[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (935) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
Video stream: 2304 x 1536
Objects:
Then it starts finding objects at 2 fps.
How do I change the video stream resolution to 1080p or 720p? The frame rate is very slow, and this appears to be the fix.
I can't find it within the Makefile or the cfg folder. Any thoughts? Is this an OpenCV problem?
Thanks!
cfg settings:
[net]
batch=64
subdivisions=8
# Training
#width=512
#height=512
width=320
height=320
channels=3
momentum=0.949
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1
learning_rate=0.0013
burn_in=1000
max_batches = 500500
policy=steps
steps=400000,450000
scales=.1,.1
I tried the built-in camera and my phone connected as an IP camera, and got 1080p on both with smooth results. I didn't find anywhere to change the webcam settings, which are stuck at 2304x1536. Where would the camera settings be located?
After searching around for a solution to this issue myself, I finally found it!
In the darknet/src/ folder is a file named "image_opencv.cpp". At lines 597 and 598 you will find the following 2 commented commands:
//cap->set(CV_CAP_PROP_FRAME_WIDTH, 1280);
&
//cap->set(CV_CAP_PROP_FRAME_HEIGHT, 960);
After trying out these commands, a lot more errors showed up; this is because YoloV4 (and my install) uses OpenCV 4.1.1, which has a different syntax. Your resolution should change to 1920x1080 if you replace the two aforementioned commands with these:
cap->set(cv::CAP_PROP_FRAME_WIDTH, 1920);
cap->set(cv::CAP_PROP_FRAME_HEIGHT, 1080);
Note that the comment slashes have been removed so as to activate the commands.
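As a quick sanity check outside of darknet, a minimal OpenCV Python sketch (assuming the same webcam index 2 from the log above) can confirm whether the camera actually accepts the requested resolution:

import cv2

cap = cv2.VideoCapture(2)  # webcam index from the demo log above
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
# Read back what the driver actually applied; some cameras silently
# fall back to the nearest supported mode.
print(cap.get(cv2.CAP_PROP_FRAME_WIDTH), cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
cap.release()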

OpenCV with multiple webcams - how to tell which camera is which in code?

Previously I've used industrial cameras with Ethernet connections and distinct IP addresses for multiple camera setups. Now I'm attempting a multiple camera setup with OpenCV and I'm not sure how to match the OpenCV VideoCapture ID to a certain camera.
I should probably use my current situation as an example to make my question more clear. I currently have 3 cameras connected. I'm using Ubuntu 18.04 if that matters. Here is my output from lsusb (omitting everything except the 3 Logitech webcams I have connected):
$ lsusb
Bus 001 Device 013: ID 046d:0843 Logitech, Inc. Webcam C930e
Bus 001 Device 003: ID 046d:0843 Logitech, Inc. Webcam C930e
Bus 001 Device 006: ID 046d:0892 Logitech, Inc. OrbiCam
As you can see I have 2 C930es and one OrbiCam connected. Based on this very helpful post:
https://superuser.com/questions/902012/how-to-identify-usb-webcam-by-serial-number-from-the-linux-command-line
I found I could get the serial number of the cams like so:
$ sudo lsusb -v -d 046d:0843 | grep -i serial
iSerial 1 D2DF1D2E
iSerial 1 99A8F15E
$ sudo lsusb -v -d 046d:0892 | grep -i serial
iSerial 1 C83E952F
Great, so I now have a way to uniquely identify each camera based on the serial numbers stored in the cam's memory (D2DF1D2E, 99A8F15E, and C83E952F).
The problem is, opening a webcam connection in OpenCV is done as follows:
vidCapForCamX = cv2.VideoCapture(OPEN_CV_VID_CAP_ID_FOR_CAM_X)
vidCapForCamY = cv2.VideoCapture(OPEN_CV_VID_CAP_ID_FOR_CAM_Y)
vidCapForCamZ = cv2.VideoCapture(OPEN_CV_VID_CAP_ID_FOR_CAM_Z)
Where camera X, Y, and Z are the 3 cameras I need to use, each for a different determined purpose, and OPEN_CV_VID_CAP_ID_FOR_CAM_X, Y, and Z are the OpenCV VideoCapture IDs. Right now, I'm relating cameras to the OpenCV VideoCapture IDs with the following manual process:
1) Make a test script like this:
# cam_test.py
import numpy as np
import cv2

cap = cv2.VideoCapture(4)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Display the resulting frame
    cv2.imshow('frame', frame)

    keyPress = cv2.waitKey(10)
    if keyPress == ord('q'):
        break
    # end if
# end while

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
2) Try numbers 0-99 for the VideoCapture parameter until I find the 3 magic numbers for my 3 attached cameras. In my current example they are 0, 2, and 4.
3) Each time I find a valid VideoCapture ID, wave my hand in front of each camera until I determine which one that VideoCapture ID is for, then write down which camera in my project that needs to correspond to, e.g. in my case:
0 => serial D2DF1D2E => cam X
2 => serial 99A8F15E => cam Y
4 => serial C83E952F => cam Z
4) Edit my code (or a stored config file or database field) so cam X uses VideoCapture ID 0, cam Y uses VideoCapture ID 2, etc.
I should clarify that cameras X, Y, and Z are in different positions and serve different purposes, i.e. if I use VideoCapture ID 4 for cam X the application wouldn't work (they have to be mapped a certain way as above).
Clearly for a production application this routine is not acceptable.
I realize I can do something like this:
import cv2

openCvVidCapIds = []

for i in range(100):
    try:
        cap = cv2.VideoCapture(i)
        if cap is not None and cap.isOpened():
            openCvVidCapIds.append(i)
        # end if
    except:
        pass
    # end try
# end for

print(str(openCvVidCapIds))
To get a list of the valid OpenCV VideoCapture IDs, but I still have to do the manual hand wave thing to determine which OpenCV VideoCapture IDs corresponds to each camera.
To make matters worse, swapping which camera is connected to which physical port on a device shuffles the OpenCV VideoCapture IDs, so if any camera connection is changed, or a cam is added or removed the manual process has to be repeated for all cameras.
So my question is, is there some genius way (in code, not a manual way) to relate the serial number of each camera or some other unique ID stored in the cam's memory to the magic numbers that OpenCV seems to come up with for VideoCapture IDs?
To put my question another way, I need to write a function camSerialNumToOpenCvVidCapId that could be used like so:
vidCapForCamX = cv2.VideoCapture(camSerialNumToOpenCvVidCapId(D2DF1D2E))
vidCapForCamY = cv2.VideoCapture(camSerialNumToOpenCvVidCapId(99A8F15E))
vidCapForCamZ = cv2.VideoCapture(camSerialNumToOpenCvVidCapId(C83E952F))
Is this possible and how could this be done?
P.S. I'm comfortable with OpenCV C++ or Python, any helpful answers using either would be greatly appreciated.
--- Edit ---
This question:
OpenCV VideoCapture device index / device number
Has a response (not accepted) that pertains to using Windows API calls, but I'm using Ubuntu.
--- Edit2 ---
@Micka, here is what I have for cameras in /dev/:
$ ls -l /dev/video*
crw-rw----+ 1 root video 81, 0 Nov 20 12:26 /dev/video0
crw-rw----+ 1 root video 81, 1 Nov 20 12:26 /dev/video1
crw-rw----+ 1 root video 81, 2 Nov 20 12:26 /dev/video2
crw-rw----+ 1 root video 81, 3 Nov 20 12:26 /dev/video3
crw-rw----+ 1 root video 81, 4 Nov 20 12:26 /dev/video4
crw-rw----+ 1 root video 81, 5 Nov 20 12:26 /dev/video5
I'm not sure if this helps.
--- Edit3 ---
After considering this some more what I really need is a cam property in OpenCV to identify each camera uniquely. After getting a list of available VideoCapture IDs as mentioned above, if there was a property like:
serialNum = cv2.get(cv2.CAP_PROP_SERIAL_NUM)
Then it would be easy, but there does not seem to be such a property or anything similar (after checking PyCharm auto-complete for cv2.CAP_PROP_* and reading the OpenCV docs for VideoCapture).
For the solution you found, you need root privileges. On my setup with Ubuntu 20 this is not required for:
udevadm info --name=/dev/video0
This outputs the properties of the first camera detected. Pipe it through grep to filter out a specific property that differs between cameras, such as ID_SERIAL=. You can then use cut to remove the ID_SERIAL= prefix and leave just the value:
udevadm info --name=/dev/video0 | grep ID_SERIAL= | cut -d "=" -f 2
In Python you can run the external command to get this info like this:
import subprocess

def get_cam_serial(cam_id):
    # Prepare the external command to extract the serial number.
    p = subprocess.Popen('udevadm info --name=/dev/video{} | grep ID_SERIAL= | cut -d "=" -f 2'.format(cam_id),
                         stdout=subprocess.PIPE, shell=True)

    # Run the command
    (output, err) = p.communicate()

    # Wait for it to finish
    p.status = p.wait()

    # Decode the output
    response = output.decode('utf-8')

    # The response ends with a new line so remove it
    return response.replace('\n', '')
To acquire all the camera serial numbers, just loop over several camera IDs. On my setup, camera IDs 0 and 1 target the same camera, and 2 and 4 target the second camera, so the loop can step by 2. Once all IDs are extracted, place them in a dictionary to associate the cam ID with the serial number. The complete code could be:
import subprocess

serials = {}
FILTER = "ID_SERIAL="

def get_cam_serial(cam_id):
    p = subprocess.Popen('udevadm info --name=/dev/video{} | grep {} | cut -d "=" -f 2'.format(cam_id, FILTER),
                         stdout=subprocess.PIPE, shell=True)
    (output, err) = p.communicate()
    p.status = p.wait()
    response = output.decode('utf-8')
    return response.replace('\n', '')

for cam_id in range(0, 10, 2):
    serial = get_cam_serial(cam_id)
    if len(serial) > 6:
        serials[cam_id] = serial

print('Serial numbers:', serials)
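Building on this, a hedged sketch of the camSerialNumToOpenCvVidCapId function the question asked for (my addition, not part of the original answer; it presumes the serials dictionary above has been filled, and matches by substring because udevadm's ID_SERIAL usually embeds the iSerial value in a longer vendor_model_serial string):

import cv2

def camSerialNumToOpenCvVidCapId(serial_num):
    # serials comes from the probing loop above,
    # e.g. {0: 'Logitech_Webcam_C930e_D2DF1D2E', ...}
    for cam_id, full_serial in serials.items():
        if serial_num in full_serial:
            return cam_id
    raise LookupError('no camera found with serial ' + serial_num)

# Example usage with one of the serials from the question:
vidCapForCamX = cv2.VideoCapture(camSerialNumToOpenCvVidCapId('D2DF1D2E'))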
It is not very difficult to do. On Linux, browse to the directory
/dev/v4l/by-id/
This directory lists all the webcams connected to your system, with names like usb-046d_081b_31296650-video-index0. Copy this id and use it in your code in the following manner:
cv::VideoCapture camera;
camera.open("/dev/v4l/by-id/usb-046d_081b_31296650-video-index0");
cv::Mat frame;
camera >> frame;
For different cameras you can first note down their ids and then refer to them in your code.
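The same approach also works from Python, since cv2.VideoCapture accepts a device path as well as a numeric index; a minimal sketch using the by-id name from this answer:

import cv2

# Open the camera by its stable /dev/v4l/by-id path instead of a
# numeric index, so it survives replugging and port changes.
camera = cv2.VideoCapture('/dev/v4l/by-id/usb-046d_081b_31296650-video-index0')
ret, frame = camera.read()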

Gazebo / Ros: How to create a camera plugin with pixel-level segmentation?

I am looking to create a camera plugin where, at each pixel of the image, I'm able to output what object it belongs to, if any. I've struggled to find a solution to this problem. Any suggestions regarding where to begin?
If you want to add a camera to a Gazebo simulation, you have to use a sensor plugin or a sensor element in your robot's SDF/URDF model, as described here. You can find both types of camera there, depth and RGB. For example, if you want a Kinect-style sensor (camera), which provides both RGB and depth images, you can use the SDF lines below in your robot model. When you run this, it will publish both RGB and depth data, as shown here (in that example I've used a ray sensor).
<gazebo reference="top">
<sensor name='camera1' type='depth'>
<always_on>1</always_on>
<visualize>1</visualize>
<camera name='__default__'>
<horizontal_fov>1.047</horizontal_fov>
<image>
<width>640</width>
<height>480</height>
<format>R8G8B8</format>
</image>
<depth_camera>
<output>depths</output>
</depth_camera>
<clip>
<near>0.1</near>
<far>100</far>
</clip>
</camera>
<plugin name='camera_controller' filename='libgazebo_ros_openni_kinect.so'>
<alwaysOn>true</alwaysOn>
<updateRate>30.0</updateRate>
<cameraName>camera</cameraName>
<frameName>/camera_link</frameName>
<imageTopicName>rgb/image_raw</imageTopicName>
<depthImageTopicName>depth/image_raw</depthImageTopicName>
<pointCloudTopicName>depth/points</pointCloudTopicName>
<cameraInfoTopicName>rgb/camera_info</cameraInfoTopicName>
<depthImageCameraInfoTopicName>depth/camera_info</depthImageCameraInfoTopicName>
<pointCloudCutoff>0.4</pointCloudCutoff>
<hackBaseline>0.07</hackBaseline>
<distortionK1>0.0</distortionK1>
<distortionK2>0.0</distortionK2>
<distortionK3>0.0</distortionK3>
<distortionT1>0.0</distortionT1>
<distortionT2>0.0</distortionT2>
<CxPrime>0.0</CxPrime>
<Cx>0.0</Cx>
<Cy>0.0</Cy>
<focalLength>0.0</focalLength>
</plugin>
</sensor>
</gazebo>
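On the ROS side, the published images can then be consumed with a minimal rospy sketch (my addition, not part of the original answer; the topic name follows from the cameraName and depthImageTopicName values in the plugin above, and cv_bridge is assumed to be available):

import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def on_depth(msg):
    # 32FC1 depth image: each pixel is a distance in meters.
    depth = bridge.imgmsg_to_cv2(msg, desired_encoding='32FC1')
    rospy.loginfo('center depth: %.2f m', depth[msg.height // 2, msg.width // 2])

rospy.init_node('depth_listener')
rospy.Subscriber('/camera/depth/image_raw', Image, on_depth)
rospy.spin()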

Haartraining OpenCV

I am trying to train cascades using haartraining. I have used the following parameters:
C:\opencv\opencv_bin\bin>opencv_haartraining -data haar -vec train.vec -bg neg.txt -numPos 1000 -numNeg 2000 -nstages 10 -mem 2000 -mode all -w 30 -h 32
but I am getting the following error:
Data dir name: haar
Vec file name: train.vec
BG file name: neg.txt, is a vecfile: no
Num pos: 2000
Num neg: 2000
Num stages: 10
Num splits: 1 (stump as weak classifier)
Mem: 2000 MB
Symmetric: TRUE
Min hit rate: 0.995000
Max false alarm rate: 0.500000
Weight trimming: 0.950000
Equal weights: FALSE
Mode: BASIC
Width: 30
Height: 32
Applied boosting algorithm: GAB
Error (valid only for Discrete and Real AdaBoost): misclass
Max number of splits in tree cascade: 0
Min number of positive samples per cluster: 500
Required leaf false alarm rate: 0.000976563
Tree Classifier
Stage
+---+
| 0|
+---+
Number of features used : 234720
Parent node: NULL
*** 1 cluster ***
OpenCV Error: Unspecified error (Vec file sample size mismatch) in icvGetHaarTrainingDataFromVec, file C:\Downloads\Software\OpenCV-2.2.0-win\OpenCV-2.2.0\modules\haartraining\cvhaartraining.cpp, line 1929
terminate called after throwing an instance of 'cv::Exception'
what(): C:\Downloads\Software\OpenCV-2.2.0-win\OpenCV-2.2.0\modules\haartraining\cvhaartraining.cpp:1929: error: (-2) Vec file sample size mismatch in function icvGetHaarTrainingDataFromVec
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
C:\opencv\opencv_bin\bin>cmd |as.txt
'as.txt' is not recognized as an internal or external command,
operable program or batch file.
I am using a vec file with 1000 samples, which I downloaded from the internet, and I have 2000 negative samples.
"Vec file sample size mismatch" - Try checking the site for the size of the samples. The vec file may not be the one for 30x32 images(which you are trying to pass as -w 30 -h 32).
This is just a guess. Try it. And try using traincascade object. It is there in $OpencvDir$/apps/traincascade/. Compile it like any other object. It can be used for LBP and HOG as well.
Hope this helps.
Regards,
Prasanna S
The ratio of w to h is different from the settings in info.txt. You should modify the w and h of all images in info.txt to match the 30:32 ratio.
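If regenerating the vec file is an option, a hedged sketch of recreating it so the sample size matches the training command (the file names info.txt and train.vec are assumptions; the -w/-h values must be identical in both steps, and numPos is set slightly below the vec sample count as a common precaution):

REM Recreate the vec file with a 30x32 sample size from the positives in info.txt
opencv_createsamples -info info.txt -vec train.vec -num 1000 -w 30 -h 32

REM Train with the newer traincascade tool suggested above; -w/-h must match the vec file
opencv_traincascade -data haar -vec train.vec -bg neg.txt -numPos 900 -numNeg 2000 -numStages 10 -w 30 -h 32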
