Gazebo / ROS: How to create a camera plugin with pixel-level segmentation?

I am looking to create a camera plugin where, at each pixel of the image, I'm able to output what object it belongs to, if any. I've struggled to find a solution to this problem. Any suggestions regarding where to begin?

If you want to simulate a camera in Gazebo, you have to use a sensor plugin together with a sensor element in your robot's SDF/URDF model, as described here. You can find both types of camera there, depth and RGB. For example, if you want a Kinect sensor (camera) that provides both the RGB and the depth image, you can use the SDF lines below in your robot model. When you run this, it will publish both the RGB and depth data, as shown here (in that example I used a ray sensor).
<gazebo reference="top">
<sensor name='camera1' type='depth'>
<always_on>1</always_on>
<visualize>1</visualize>
<camera name='__default__'>
<horizontal_fov>1.047</horizontal_fov>
<image>
<width>640</width>
<height>480</height>
<format>R8G8B8</format>
</image>
<depth_camera>
<output>depths</output>
</depth_camera>
<clip>
<near>0.1</near>
<far>100</far>
</clip>
</camera>
<plugin name='camera_controller' filename='libgazebo_ros_openni_kinect.so'>
<alwaysOn>true</alwaysOn>
<updateRate>30.0</updateRate>
<cameraName>camera</cameraName>
<frameName>/camera_link</frameName>
<imageTopicName>rgb/image_raw</imageTopicName>
<depthImageTopicName>depth/image_raw</depthImageTopicName>
<pointCloudTopicName>depth/points</pointCloudTopicName>
<cameraInfoTopicName>rgb/camera_info</cameraInfoTopicName>
<depthImageCameraInfoTopicName>depth/camera_info</depthImageCameraInfoTopicName>
<pointCloudCutoff>0.4</pointCloudCutoff>
<hackBaseline>0.07</hackBaseline>
<distortionK1>0.0</distortionK1>
<distortionK2>0.0</distortionK2>
<distortionK3>0.0</distortionK3>
<distortionT1>0.0</distortionT1>
<distortionT2>0.0</distortionT2>
<CxPrime>0.0</CxPrime>
<Cx>0.0</Cx>
<Cy>0.0</Cy>
<focalLength>0.0</focalLength>
</plugin>
</sensor>
</gazebo>

Related

cv2.VideoCapture(0, cv2.DSHOW) returns none

I'm trying to capture video from a built-in webcam on a laptop (or an external USB camera) using OpenCV, specifically VideoCapture with the DSHOW argument.
I know there is a way to set the resolution and even the FPS; however, the DirectShow argument for the API returns None when I include it in the code.
For example:
# returns my webcam's stream, but all optional arguments are ignored
camera = cv2.VideoCapture(0)
camera = cv2.VideoCapture(0, cv2.CAP_V4L2)
# returns None and either loops infinitely or errors out at 'if im.any()'
camera = cv2.VideoCapture(0, cv2.CAP_DSHOW)
This is the code that follows the above:
# should set resolution, settings are always ignored
camera.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
camera.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
while(True):
    retval, im = camera.read()
    if im.any(): # errors out when image is None
        cv2.imshow("image", im)
        k = cv2.waitKey(33)
        if k==27: # Esc key press
            print('Resolution: {0}x and {1}y'.format(im.shape[1],im.shape[0]))
            print('FPS: {0}'.format(camera.get(cv2.CAP_PROP_FPS)))
            break
camera.release()
cv2.destroyAllWindows()
Is DSHOW the correct API to use, and is it the only API that can change the resolution and FPS of a camera stream using OpenCV? Or is there something else I'm doing incorrectly?
More details about the system.
Ubuntu 18.04.6
python 3.9.5
opencv-python 4.5.2.52
Thank you in advance for the help!
Regards, Tiz
DSHOW (and MSMF) are Windows-only.
On Linux, use V4L2, FFMPEG, or GSTREAMER.
Also, please check the return value of capture.set();
not all properties/values will be supported on any given machine.
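For example, a minimal sketch on Linux (the device index 0 and the target resolution are assumptions) that checks whether the requested properties were actually applied:
import cv2

# use the V4L2 backend on Linux instead of DSHOW
camera = cv2.VideoCapture(0, cv2.CAP_V4L2)

# set() returns False when the driver does not accept the property/value
ok_w = camera.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
ok_h = camera.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
print('width accepted:', ok_w, 'height accepted:', ok_h)
print('actual resolution: {0}x{1}'.format(
    int(camera.get(cv2.CAP_PROP_FRAME_WIDTH)),
    int(camera.get(cv2.CAP_PROP_FRAME_HEIGHT))))

retval, im = camera.read()
if not retval or im is None:
    print('failed to grab a frame')
camera.release()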

Adaptive Fourier Filter

Main question
Has someone already created a free adaptive Fourier filter for DigitalMicrograph (or alternatively ImageJ)?
About the adaptive Fourier filter
I want to use some effective filtering processes for my TEM image processing. I came across the adaptive Fourier filtering technique introduced by Möbus et al. in 1993 [1]. In short, this is a reciprocal-space filtering technique with the workflow:
FFT( Image ) --> Mask * FFT( Image ) --> iFFT( Mask * FFT( Image ) )
The new feature of this filter is that the shape of the filter is adapted to the spectrum of the image, and the windows of the mask are automatically placed at all positions, which allows an optimal separation of signal from noise [2].
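For reference, the whole workflow boils down to a few lines; here is a minimal NumPy sketch (not DM or ImageJ code, and the mask is just a placeholder, since the adaptive construction of the mask is exactly the part still missing):
import numpy as np

def fourier_filter(image, mask):
    # mask has the same shape as the image: 1 where the spectrum is kept, 0 where it is suppressed
    spectrum = np.fft.fftshift(np.fft.fft2(image))          # FFT( Image )
    masked = mask * spectrum                                 # Mask * FFT( Image )
    return np.abs(np.fft.ifft2(np.fft.ifftshift(masked)))   # iFFT( Mask * FFT( Image ) )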
What have I already tried?
The filter is available in the HREM Filters Pro package from HREM Research https://www.hremresearch.com/Eng/plugin/FiltersEng.html , but my institute does not have a license for this. I have found DM scripts for other filters such as Wiener filters and average background subtracted filters on the DM script database https://www.felmi-zfe.at/dm_script, but there is no adaptive filter.
So what was the question again?
Since I have no experience with DM scripting myself, I would prefer to find or adjust an already existing DM script on adaptive Fourier filtering. Alternatively, I also do some of my image processing in ImageJ, so a script for this program would work as well. Do any of you know whether such scripts already exist?
Sources
[1] Möbus, G., G. Necker, and M. Rühle. "Adaptive Fourier-filtering technique for quantitative evaluation of high-resolution electron micrographs of interfaces." Ultramicroscopy 49.1-4 (1993): 46-65.
[2] Kret, S., et al. "Extracting quantitative information from high resolution electron microscopy." physica status solidi (b) 227.1 (2001): 247-295.
The Adaptive Threshold ImageJ plugin which can be downloaded from:
https://sites.google.com/site/qingzongtseng/adaptivethreshold
is indeed an adaptive filter.
I'm not aware of an (open-source) script for this, but a base template for a Fourier-space filtering script in DigitalMicrograph would be:
// Create and show test image
realimage img := RealImage( "Test Image 2D", 4, 512, 512 )
img = abs( itheta*2*icol/(iwidth+1)* sin(iTheta*10) ) + 15*(irow<iheight/2 ? irow : iheight-irow )/iheight
img = PoissonRandom(100*img)
img.ShowImage()
// Transform to Fourier Space
compleximage img_FFT := FFT(img)
// Create "Mask" or Filter in Fourier Space
// This is where all the "adaptive" things have to happen to create
// the correct mask. The below is just a dummy
image mask := RealImage("Mask",4, 512,512 )
mask = (iradius<iheight/3 && iradius>5 ) ? 1 : 0
mask = SQRT((icol-iwidth/2-100)**2+(irow-iheight/2-50)**2) < 25 ? 0 : mask
mask = SQRT((icol-iwidth/2+100)**2+(irow-iheight/2+50)**2) < 25 ? 0 : mask
mask.ShowImage()
// Apply mask
img_FFT *= mask
img_FFT.SetName( "Masked FFT" )
img_FFT.ShowImage()
// Transform back
image img_filter := modulus(iFFT(img_FFT))
img_filter.SetName( img.GetName() + " Filtered" )
img_filter.ShowImage()
// Just arrange
EGUPerformActionWithAllShownImages("arrange")

Change video stream resolution in YoloV4 demo

Here's what shows up when loading the live stream demo for YOLOv4:
Webcam index: 2
[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (935) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
Video stream: 2304 x 1536
Objects:
Then it starts finding objects with 2 fps.
How do I change the video stream resolution to 1080p or 720p? The frame rate is very slow, and this appears to be the fix.
I can't find it in the makefile or the cfg folder. Any thoughts? Is this an OpenCV problem?
Thanks!
cfg settings:
[net]
batch=64
subdivisions=8
# Training
#width=512
#height=512
width=320
height=320
channels=3
momentum=0.949
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1
learning_rate=0.0013
burn_in=1000
max_batches = 500500
policy=steps
steps=400000,450000
scales=.1,.1
I tried with the built-in camera and with my phone connected as an IP camera, and got 1080p on both with smooth results. I didn't find anywhere to change the webcam settings, which are stuck at 2304x1536. Where would the camera settings be located?
After searching around for a solution to this issue myself I finally found it!
In the darknet/src/ folder is a file named "image_opencv.cpp". At lines 597 and 598 you will find the following 2 commented commands:
//cap->set(CV_CAP_PROP_FRAME_WIDTH, 1280);
&
//cap->set(CV_CAP_PROP_FRAME_HEIGHT, 960);
After trying out these commands, a lot more errors showed up; this is due to YOLOv4 (and my install) using OpenCV 4.1.1, which has a different syntax. Your resolution should change to 1920x1080 if you replace the two aforementioned commands with these:
cap->set(cv::CAP_PROP_FRAME_WIDTH, 1920);
cap->set(cv::CAP_PROP_FRAME_HEIGHT, 1080);
Notice that the comment slashes have been removed so as to activate the commands.

Implementing Kinect-like hierarchical rotation

I get a data stream with the 3D positions (in a fixed world coordinate system) of a human's 20 skeleton joints.
I want to use the skeleton data to drive a human model with fixed bones, like in the demo video.
In Kinect SDK v1.8, I could get each bone's local rotation via NUI_SKELETON_BONE_ORIENTATION.hierarchicalRotation.
I want to implement a function like that, but the Kinect SDK isn't open source.
I've found that the function xnGetSkeletonJointOrientation can get a joint's rotation like that in OpenNI, but I haven't found its implementation. I don't know where I'm going wrong.
Any idea is appreciated. Thanks!
EDIT
I have found a similar question.
Here is the code he finally used.
Point3d Controller::calRelativeToParent(int parentID, Point3d point, int frameID){
    if(parentID == 0){
        QUATERNION temp = calChangeAxis(-1, parentID, frameID);
        return getVect(multiplyTwoQuats(multiplyTwoQuats(temp, getQuat(point)), getConj(temp)));
    }else{
        Point3d ref = calRelativeToParent(originalRelativePointMap[parentID].parentID, point, frameID);
        QUATERNION temp = calChangeAxis(originalRelativePointMap[parentID].parentID, parentID, frameID);
        return getVect(multiplyTwoQuats(multiplyTwoQuats(temp, getQuat(ref)), getConj(temp)));
    }
}

QUATERNION Controller::calChangeAxis(int parentID, int qtcId, int frameID){ // currentid = id of the position of the orientation to be changed
    if(parentID == -1){
        QUATERNION out = multiplyTwoQuats(quatOrigin.toChange, originalRelativePointMap[qtcId].orientation);
        return out;
    }
    else{
        //QUATERNION temp = calChangeAxis(originalRelativePointMap[parentID].parentID, qtcId, frameID);
        //return multiplyTwoQuats(finalQuatMap[frameID][parentID].toChange, temp);
        return multiplyTwoQuats(finalQuatMap[frameID][parentID].toChange, originalRelativePointMap[qtcId].orientation);
    }
}
But I still have some questions about that.
What do the variables quatOrigin.toChange and originalRelativePointMap stand for?
And in my opinion, the parameter Point3d point of the function Controller::calRelativeToParent should be a vector of Euler angles. If so, how should the Controller::calRelativeToParent API be called in the main program, given that we only know the root's rotation?
The skeleton class has a "Joints" member that contains all the 3D position data for each tracked joint on the skeleton. I would look at the joint position data directly to drive your model rather than angles. Take one point to be your base (head or otherwise), then generate vectors in tree form between pairs of connected skeletal points. Scale those vectors and apply them to your model.
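A rough sketch of that suggestion (the joint names, the parent map, and the scale factor are illustrative assumptions, not Kinect or OpenNI API):
import numpy as np

# parent of each joint in the skeleton tree, with the head taken as the base
PARENT = {
    'neck': 'head',
    'torso': 'neck',
    'left_shoulder': 'neck',
    'left_elbow': 'left_shoulder',
    # ... remaining joints
}

def bone_vectors(joints, scale=1.0):
    # joints: dict mapping joint name -> np.array([x, y, z]) in world coordinates
    bones = {}
    for child, parent in PARENT.items():
        bones[child] = scale * (joints[child] - joints[parent])
    return bones

# apply each bone vector to the corresponding bone of the model, starting from the base joint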

OpenCV with Kinect: beginner's doubts

I have OpenCV and libfreenect configured on my Ubuntu 11.04 machine, and they work separately.
I also have some experience with OpenCV, but the problem is I don't know how to combine the Kinect and OpenCV. I was hoping someone would kindly help me out by pointing me to good documentation or providing simple sample code for using the Kinect with OpenCV.
The first link on google for "OpenCV kinect" was this. I hope it helps.
To quickly get things working, I would recommend adding the OpenCV libraries to one of the OpenNI samples (for example NiUserTracker). There you can acquire the depth image from the DepthMetaData object in the following way.
//obtain depth image
DepthMetaData depthMD;
g_DepthGenerator.GetMetaData(depthMD);
const XnDepthPixel* g_Depth = depthMD.Data();
cv::Mat DepthBuf(480,640,CV_16UC1,(unsigned short*)g_Depth);
//To display the depth image you probably would want to normalize it to 0-255 range first
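// For example, a minimal sketch of that normalization (min-max scaling chosen here as an assumption):
cv::Mat DepthVis;
cv::normalize(DepthBuf, DepthVis, 0, 255, cv::NORM_MINMAX, CV_8UC1);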
//obtain rgb image
ImageMetaData ImageMD;
g_ImageGenerator.GetMetaData(ImageMD);
const XnUInt8* g_Img =ImageMD.Data();
cv::Mat ImgBuf(480,640,CV_8UC3,(unsigned char*)g_Img);
cv::Mat ImgBuf2;
cv::cvtColor(ImgBuf,ImgBuf2,CV_RGB2BGR);
To get MrglMrgl's code to work, I had to add the following at the beginning:
nRetVal = g_Context.FindExistingNode(XN_NODE_TYPE_IMAGE, g_ImageGenerator);
if (nRetVal != XN_STATUS_OK)
{
printf("No image node exists! Check your XML.");
return 1;
}
And this at the end:
cv::namedWindow( "Example1", CV_WINDOW_AUTOSIZE );
cv::imshow( "Example1", ImgBuf2 );
cv::waitKey(0);
