I'm trying to set up a ROS navigation stack on a Gazebo-simulated Pepper robot. For my local planner I'm using the DWA planner. Here is my YAML file for the local planner.
When I give the stack a navigation goal via RViz, no matter what goal I give the robot, it simply starts rotating in place, as in this video. Thanks!
So I looked at my odom topic and everything seems to be fine. Here is my odom topic. In the topic, the odometry frame is pepper_robot/odom and the child frame is pepper_robot/base_link, but when I look at my tf_tree I can't see the odometry frame. Here is my tf_tree. I also checked that my odometry is published as nav_msgs/Odometry.
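From what I understand, the local planner needs an actual pepper_robot/odom -> pepper_robot/base_link transform in tf, not just the nav_msgs/Odometry topic, and that missing frame could explain the rotating in place. In case the driver only publishes the topic, here is a minimal sketch (the /pepper_robot/odom topic name is my assumption; adjust it to the actual setup) of a node that re-broadcasts the odometry as a tf transform:

#!/usr/bin/env python
# Minimal sketch: re-broadcast nav_msgs/Odometry as an odom -> base_link tf transform.
# The topic name below is an assumption; the frame names are taken from the message itself.
import rospy
import tf2_ros
from nav_msgs.msg import Odometry
from geometry_msgs.msg import TransformStamped

def odom_cb(msg, broadcaster):
    t = TransformStamped()
    t.header.stamp = msg.header.stamp
    t.header.frame_id = msg.header.frame_id      # e.g. pepper_robot/odom
    t.child_frame_id = msg.child_frame_id        # e.g. pepper_robot/base_link
    t.transform.translation.x = msg.pose.pose.position.x
    t.transform.translation.y = msg.pose.pose.position.y
    t.transform.translation.z = msg.pose.pose.position.z
    t.transform.rotation = msg.pose.pose.orientation
    broadcaster.sendTransform(t)

if __name__ == '__main__':
    rospy.init_node('odom_tf_broadcaster')
    broadcaster = tf2_ros.TransformBroadcaster()
    rospy.Subscriber('/pepper_robot/odom', Odometry, odom_cb, callback_args=broadcaster)
    rospy.spin()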
I created an OpenVR driver and it works fine. I have tried adding a device in the driver (TrackedDeviceClass_GenericTracker); this tracker device receives third-party positioning data via UDP and overrides the headset's positioning data by using trackingOverrides. I don't use a base station.
The problem now is:
When I use trackingOverrides to override the HMD positioning, the SteamVR application immediately pops up a prompt asking me to set up the room (error code C200), and the VR view is not displayed on the PC.
I have seen suggestions that another chaperone JSON file can be used directly, but I don't know how to do that; can anyone give me the details? How do I unify the coordinate system in the OpenVR driver, and how do I avoid the frequent room-setup pop-up in SteamVR?
I want to use the position data from the third-party positioning source in the driver (without its rotation) and combine it with the rotation data from the HTC Vive headset's own gyroscope. After fusing the two, I would set the headset's position and rotation together, because I know the headset's own gyro rotation data is very accurate. Is this approach correct? And how do I get the rotation data of the HTC Vive headset's gyroscope through an OpenVR driver? Note that I do not use Lighthouse base stations.
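To make the idea concrete, the fusion I have in mind is essentially just this (a rough Python/NumPy sketch for illustration only; R_align, t_align and the input names are placeholders for the calibration between the two coordinate systems, and in the actual driver the result would be written into the DriverPose_t fields in C++):

# Sketch: express the third-party (UDP) position in the SteamVR tracking frame,
# then pair it with the headset's own IMU rotation as a single pose.
# R_align and t_align stand for a pre-measured calibration between the two frames.
import numpy as np

R_align = np.eye(3)       # rotation UDP frame -> SteamVR frame (placeholder)
t_align = np.zeros(3)     # translation UDP frame -> SteamVR frame (placeholder)

def fuse_pose(udp_position, hmd_rotation_quat):
    position = R_align @ np.asarray(udp_position, dtype=float) + t_align
    rotation = np.asarray(hmd_rotation_quat, dtype=float)
    rotation /= np.linalg.norm(rotation)   # keep it a unit quaternion
    return position, rotation

pos, rot = fuse_pose((0.1, 1.6, -0.3), (1.0, 0.0, 0.0, 0.0))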
I have tested a lot of examples, but all of them have the above problems, including the official Vive Git sample, OpenVR-driver-for-DIY, Simple-OpenVR-Driver-Tutorial, and many more.
I asked the same question on the Steam forum because this has bothered me for a long time. I hope to get help, thanks.
I am working on a project that will display objects below the ground using AR Quick Look. However, the AR mode seems to bring everything above the ground based on the bounding box of the objects in the scene.
I have tried using the USDZ directly and composing a simple scene in Reality Composer with the object, or with a simple cube, with exactly the same result. AR preview mode in Reality Composer shows the object below the ground or below an image anchor correctly. However, if I export the scene as a .reality file and open it using AR Quick Look, it brings the object above the ground as well.
Is there a way to show an object below the detected horizontal plane or (horizontal) image anchor using AR Quick Look?
This is still an issue a year later. I have submitted feedback to Apple, and I suggest you do too. I have suggested adding a checkbox to keep the Y axis persistent. My assumption is that it behaves this way to prevent the object from colliding with the ground, but I don't think that's necessary. It's just a limitation right now.
I am currently working on a license plate detection system and need some guidance on how to proceed.
I can capture frames (via video playback) and, with the help of an open-source library called OpenALPR, display the license plates directly in the terminal. The issue now is that it works on a frame-by-frame basis, so it captures the same license plate multiple times. I added a frame-skip variable and now it skips however many frames I want, but the issue is still there.
Furthermore, I'd like to distinguish between different license plates if possible, but I don't know how to approach that; I've attempted basic object detection but failed miserably.
Below is an image of the program running. As can be seen, it detects a single license plate and displays multiple instances of it. The issue is that I expect it to move on to the next car and display Plate #1; unfortunately it does not and keeps feeding into Plate #0.
Program Running
The function that actually displays the license plate text is below; really, the first line does all the work. OpenALPR is pretty powerful.
results = alpr.recognize_ndarray(frame)
for i, plate in enumerate(results['results']):
    # candidates are ordered by confidence, so index 0 is the best guess
    best_candidate = plate['candidates'][0]
    print('Plate #{}: {:} ({:}%)'.format(i,
                                         best_candidate['plate'].upper(),
                                         best_candidate['confidence']))
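For reference, a naive way to stop the same reading being printed every frame would be to remember which plate strings have already been reported, roughly like the sketch below (seen_plates and plate_count would live outside the per-frame loop); OCR noise means the same physical plate can still come back as slightly different strings, though, which is part of what I am asking about.

# Naive de-duplication sketch: only report a plate string the first time it is seen.
# seen_plates and plate_count must persist across frames (declare them once, outside the loop).
seen_plates = set()
plate_count = 0

results = alpr.recognize_ndarray(frame)
for plate in results['results']:
    best_candidate = plate['candidates'][0]
    text = best_candidate['plate'].upper()
    if text in seen_plates:
        continue                      # already reported this plate string
    seen_plates.add(text)
    print('Plate #{}: {} ({}%)'.format(plate_count, text, best_candidate['confidence']))
    plate_count += 1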
I'd like some guidance on how I can solve this problem, which is basically distinguishing between different license plates.
It is a general problem without a general solution, because it highly depends on context. Some thoughts:
If it is a video feed, you can track the plate's movement; the track will "jump" when another plate is detected. Say the maximum optical-flow velocity is 100 px/frame: if the position jumps by more than this threshold, you can assume it is a new plate (see the sketch after these points).
Depending on your video quality and detector, there may be spurious jumps, so I would add a Kalman filter or any simple filter.
Perhaps there is a minimum time lapse between one plate leaving the image and the next one arriving. You can use a time threshold to trigger a "plate changed" event.
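A minimal sketch of the first idea in Python (JUMP_THRESHOLD is an arbitrary example value, and I am assuming each OpenALPR result carries the plate's corner points in 'coordinates' as in its JSON output):

# "Jump" heuristic: if the plate centre moves more than JUMP_THRESHOLD pixels
# between consecutive detections, treat the detection as a new plate.
import math

JUMP_THRESHOLD = 100.0   # px between consecutive detections; tune to your footage
last_centre = None
plate_id = 0

def plate_centre(plate):
    # Average of the four corner points OpenALPR reports for the plate.
    xs = [p['x'] for p in plate['coordinates']]
    ys = [p['y'] for p in plate['coordinates']]
    return sum(xs) / len(xs), sum(ys) / len(ys)

def assign_plate_id(plate):
    """Return the id of the plate this detection belongs to."""
    global last_centre, plate_id
    centre = plate_centre(plate)
    if last_centre is not None:
        jump = math.hypot(centre[0] - last_centre[0], centre[1] - last_centre[1])
        if jump > JUMP_THRESHOLD:
            plate_id += 1            # the track "jumped": assume a new plate
    last_centre = centre
    return plate_id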
I want to make a movable camera that tracks an open hand (facing the floor). It just needs to track the open hand, but it also has to know the rotation (2D rotation).
This is what I have looked into so far:
Contour: as the camera is movable, the background is unknown and even the lighting is not fixed, so it's hard for me to get a clean hand segmentation in real time.
Haar cascade: it seems this just returns a rect and can't deal with rotation.
Feature detection: a hand doesn't have enough detail for this.
I am using the OpenCV Unity plugin to do this.
EDIT
https://www.codeproject.com/Articles/826377/Rapid-Object-Detection-in-Csharp
I see another library can do something like this. Can OpenCV also do this?
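To be concrete about the rotation part: something like cv2.minAreaRect on a hand mask would give the 2D angle, if only the segmentation were reliable. Here is a rough Python sketch of that idea (the HSV skin range is a placeholder, the OpenCV 4.x findContours signature is assumed, and the equivalent calls should exist in the OpenCV Unity plugin):

# Rough illustration: get a 2D rotation angle from a hand contour with OpenCV.
# The HSV skin range is a placeholder and would need tuning per scene/lighting.
import cv2
import numpy as np

def hand_angle(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))    # crude skin mask (placeholder range)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)                # assume the largest blob is the hand
    (cx, cy), (w, h), angle = cv2.minAreaRect(hand)          # angle is the in-plane rotation
    return (cx, cy), angle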
I am currently working with the "Xtion Pro Live" using the "OpenNI" library.
The Xtion must be placed vertically (along a wall). The problem is that in this position the user calibration always fails, so it is impossible to get the skeleton info.
So I would like to know how to fix this issue; I suppose there is something I didn't understand about "GetSkeletonCap().RequestCalibration()" or the "SampleConfig.xml" file. After a lot of research, however, I am still stuck.
Try moving the user, and then the camera, in a 360-degree circle around the subject, keeping the vertical positioning of the camera the same all the way through. It may find the optimal angle for the depth sensor. We did this twice with the Kinect and it worked.
Also make sure the room is well lit.