My current situation is that I need to run a simulation on an HTC Vive using ROS. So far I have connected ROS and V-REP on Ubuntu inside VirtualBox, and I have connected the HTC Vive on Windows and set it up with SteamVR. I would like to stream the simulation from the VirtualBox guest, through ROS, to the HTC Vive on the Windows host.
I know that I need to write a script for that; ROS communicates over TCP, so I should be able to connect to the Vive on Windows. Can someone explain this in more detail, or suggest a better solution for running a simulation on the Vive using ROS? I haven't done anything like this before. Or is it simpler to use an Oculus Rift on a dual-boot machine and put ROS there?
As no one has answered, you might guess there is a bigger gap than you think. You would have to visualize your ROS world, and you also need two-way communication. You will have to study a few more of these technologies and then come back with a concrete problem.
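For the connection itself, one common pattern (an assumption here, not something confirmed in this thread) is to run rosbridge_suite on the Ubuntu guest and talk to it from Windows with roslibpy; rosbridge exposes ROS topics over a WebSocket on top of TCP. The IP, port and topic names below are placeholders:

import time
import roslibpy

# Assumes rosbridge_server is running on the VirtualBox guest
# (roslaunch rosbridge_server rosbridge_websocket.launch)
# and that port 9090 is reachable from the Windows host.
client = roslibpy.Ros(host='192.168.56.101', port=9090)  # placeholder guest IP
client.run()

# Subscribe to a (placeholder) topic published by the V-REP simulation.
listener = roslibpy.Topic(client, '/vrep/camera_pose', 'geometry_msgs/Pose')
listener.subscribe(lambda msg: print('pose:', msg))

# Publishing in the other direction gives the two-way link mentioned above,
# e.g. feeding Vive tracking data back into the simulation.
talker = roslibpy.Topic(client, '/vive/pose', 'geometry_msgs/Pose')
talker.publish(roslibpy.Message({
    'position': {'x': 0.0, 'y': 0.0, 'z': 0.0},
    'orientation': {'x': 0.0, 'y': 0.0, 'z': 0.0, 'w': 1.0}}))

try:
    while client.is_connected:
        time.sleep(0.1)
except KeyboardInterrupt:
    client.terminate()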
I wrote a C++ program in Visual Studio for anomaly detection using OpenCV. I'm now able to capture images with a Basler ace camera and process them in Visual Studio. The camera is connected directly to the computer over USB 3.0.
My next step is to synchronize image capture and processing with the robot's movement. I have an ABB IRB 1440 robot.
What are the possible solutions for this? Should I buy expensive Siemens PLCs? Can the solution be found some other way? What type of communication should I use?
This is a very specific question about a commercial product. I suggest you contact ABB's support and read the robot controller's manual; there you will find information on how to interface with it in a safe manner.
It is not the robot you want to talk to, it is its controller!
According to the info I found on the IRB 1440 (it seems to be a sub-model of the IRB 1400), the controller is an S4Cplus.
The way we usually do it is a Windows-PC-based image processing system hooked up to a PLC (Siemens, Mitsubishi, ...), which forwards our coordinates, angles and whatnot to the robot controller.
Of course the PLC can be omitted if your PC is the "boss" of the entire system.
S4Cplus Product Specifications
This controller comes with various interfacing options, including RS232, RS422 and Ethernet, as well as a whole bunch of industry standards.
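Purely as an illustration of the Ethernet option (every address, port and field here is a placeholder; the real message layout is defined by whatever program runs on the controller, so check the manual), the PC side can be as small as a raw TCP socket:

import socket
import struct

CONTROLLER_ADDR = ('192.168.0.50', 1025)  # placeholder IP and port

def send_pose(x, y, z, angle):
    # Pack the target pose as four little-endian floats; the receiving
    # program on the controller/PLC must expect exactly this layout.
    payload = struct.pack('<4f', x, y, z, angle)
    with socket.create_connection(CONTROLLER_ADDR, timeout=2.0) as sock:
        sock.sendall(payload)

send_pose(250.0, 100.0, 30.0, 45.0)  # coordinates from the vision system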
Having a separate PLC is not necessary, as the controller may serve as a PLC itself, although you might have to upgrade the controller with so-called I/O nodes.
But as I said, refer to the manuals and ABB support.
Obviously, a non-realtime system like a Windows PC is not an option for implementing any safety features.
I have never asked this kind of question on Stack Overflow before, and I wonder if you could help me, because it is a "bit" vague.
I have to design a project that uses a Teensy (a simple ARM platform) to get data from an IR camera (FLIR, resolution 80x60) over SPI, stream that data to a Linux/Windows machine (over USB serial), and do something simple with it in OpenCV.
THE PROBLEM: The project lacks some "innovation". It should not be something very complicated, but rather a different approach, or trying something new.
Do you have recommendations/tutorials/books/experience with the things mentioned above? Or do you see potential for trying something new?
You might want to check out the OpenCV Cookbook for some ideas.
There is a project using this FLIR with a Teensy. It provides a thermal image on a small LCD screen (without any additional computer).
https://hackaday.io/project/8994-diy-thermocam
So the Teensy can get data over SPI. Can the Teensy then send the data over USB? Probably, but you will have to check whether the rate is high enough.
Using OpenCV directly on the Teensy is not possible because of the size of the library, but you can probably do some basic image processing if the code is small enough.
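On the PC side, a minimal sketch (assuming, and this is only an assumption, that the Teensy streams raw 80x60 16-bit frames back-to-back over USB serial; a real protocol would want a sync header) using pyserial and OpenCV:

import cv2
import numpy as np
import serial  # pyserial

PORT = 'COM3'  # or '/dev/ttyACM0' on Linux; placeholder
FRAME_BYTES = 80 * 60 * 2  # one raw 16-bit Lepton frame

ser = serial.Serial(PORT, 115200, timeout=1)
while True:
    raw = ser.read(FRAME_BYTES)
    if len(raw) != FRAME_BYTES:
        continue  # incomplete frame; without a header we just retry
    frame = np.frombuffer(raw, dtype=np.uint16).reshape(60, 80)
    # Stretch to 8 bits for display; any "simple OpenCV" step goes here.
    vis = cv2.normalize(frame, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    cv2.imshow('lepton', cv2.resize(vis, (320, 240), interpolation=cv2.INTER_NEAREST))
    if cv2.waitKey(1) == 27:  # Esc to quit
        break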
The FLIR Lepton can be interfaced directly with a Linux or Windows computer, so I don't really see the need for the Teensy.
I would recommend a Raspberry Pi to interface with the FLIR Lepton and then do some image processing. It's well documented on the web.
I recently purchased a Minoru 3D webcam (http://www.minoru3d.com/) in the hope of using it for stereo vision in OpenCV. I thought I had done the proper research before ordering, verifying that it would work, but all of those resources are a number of years old.
For the moment, though, OpenCV can be ignored: I am using Processing and just trying to access both cameras separately. It would appear some people have had success in various languages, but the documentation is sparse and in the end just takes me in circles.
Running Capture.list() in Processing produces a list showing:
name=Vimicro USB2.0 UVC PC Camera,size=640x480,fps=5
name=Vimicro USB2.0 UVC PC Camera,size=640x480,fps=30
etc
name=Vimicro USB2.0 UVC PC Camera,size=640x480,fps=5
name=Vimicro USB2.0 UVC PC Camera,size=640x480,fps=30
etc
plus my laptop's webcam.
Although I can access the first set, the duplicates are blank. Other software, such as Skype, sees the device as "Minoru 3D Webcam". So far I have only seen the device working in one piece of capture software, which was installed from the CD that came with it. Skype lists it, but either says it's in use or just waits and waits. Note that the camera can be switched from red/blue (anaglyph) output to side-by-side.
I am running Windows 7 64-bit and did my best to find the most recent drivers. If I had a working Linux computer I would definitely try that, but at the moment it's not an option.
If I could just access the one "Minoru 3D Webcam" device with side-by-side output, that would be great. But even hearing that it definitely won't work would be helpful.
I have this configuration (Windows 7 64-bit, OpenCV 2.4.9).
To make the Minoru 3D functional, I had to recompile OpenCV with the USE_DSHOW flag on.
In fact, it's only necessary to have a recompiled opencv_highgui249.lib and DLL.
For DirectShow, you'll need the Windows SDK.
I have had exactly the same problem as you (Windows 7 Enterprise, 64-bit). I am currently on the OpenCV master branch, building for Visual Studio 2010 C++.
After several evenings of failing to capture both Minoru cameras with e.g.:
VideoCapture cap1(1);   // first Minoru camera (device index 1)
::Sleep(200);           // brief pause before opening the second camera
VideoCapture cap2(2);   // second Minoru camera (device index 2)
if (!cap1.isOpened() || !cap2.isOpened()) {
    return -1;
}
... // stereo calibration
I found out by trial and error that both cameras were captured correctly once I:
Used the default Microsoft Vimicro USB2.0 PC Camera driver, i.e. I completely uninstalled the Minoru software that came on the CD.
Plugged the Minoru into a USB 2.0 port only. If I plug the Minoru into a USB 3.0 port, both cameras light up but OpenCV only captures from one of them - rather unusable for stereo vision.
I found a simple application running OpenCV with Python on a Raspberry Pi that may help you. The code used for processing the image is:
Example.py
import cv2
import numpy as np

c = cv2.VideoCapture(0)
c.set(3, 1280)  # property 3 = frame width: both cameras side by side
c.set(4, 480)   # property 4 = frame height

while True:
    _, visao = c.read()                # visao = the combined view
    esquerdo = visao[0:480, 0:640]     # left half of the frame
    direito = visao[0:480, 640:1280]   # right half of the frame
    cv2.imshow('esquerdo', esquerdo)
    cv2.imshow('direito', direito)
    if cv2.waitKey(5) == 27:           # Esc to quit
        break

cv2.destroyAllWindows()
Reference: http://jeaeletronica.blogspot.com.br/2013/07/how-to-run-minoru-3d-webcam-on.html
I want to do a project that uses eye tracking. Is it possible to port OpenCV code to a microcontroller?
I am new to both OpenCV and microcontrollers, so can anyone tell me whether it is possible to write code that works like this video?
http://www.youtube.com/watch?feature=endscreen&v=eBtpKAja-m0&NR=1
Q: Can I use eye-detecting OpenCV code on a microcontroller?
A: Yes, you can.
Q: Is it possible to port OpenCV code to a microcontroller?
A: OpenCV already runs on Unix and Android platforms. The easiest approach is therefore to get hold of an embedded ARM device; there is a lot of help available for the 'OpenCV-ARM' combination.
BeagleBoard and Raspberry Pi are the cheapest embedded ARM devices available, at less than $150. They sometimes come preloaded with a Unix boot system and OpenCV 2.0, making it easy to run the executable you created on your desktop system.
Be aware of the speed of the processor, though. If your algorithm is computationally intensive, you won't be quite satisfied with the output of low-end embedded devices.
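To give a feel for what such code looks like, here is a minimal Haar-cascade eye-detection sketch of the kind a Raspberry Pi can run (this is generic OpenCV, not the method from the video; cv2.data.haarcascades requires a reasonably recent OpenCV Python package):

import cv2

# Load the eye cascade that ships with OpenCV.
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_eye.xml')

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect eyes and draw a rectangle around each one.
    for (x, y, w, h) in eye_cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('eyes', frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()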
If an embedded ARM Linux board fits your definition of a microcontroller, then there is nothing to port.
http://www.google.com/search?q=opencv+arm
I was wondering if it would be possible to capture live video from my integrated webcam using LabVIEW 2011 (National Instruments). All I need to do for now is put the camera feed on the front panel. This is not a USB webcam; it is a Chicony USB 2.0 camera (it does not show up as USB on my PC). Can anyone help me?
LV2012? Is this beta?
The best way to do this is with the IMAQdx drivers plus the Vision Development Module. After installing IMAQdx, USB cams usually show up in Measurement & Automation Explorer right away, and you can try out Snap/Grab there. (Tip: do install whatever driver is included with the hardware or on a CD.)
Then, in LabVIEW, just drop the "IMAQ Acquisition Express" VI into your block diagram and you'll be guided through a very quick and easy setup.
I'm not much into Express VIs, but that one is good.
If you don't have the Vision Development Module, look into ADVision (http://vi-lib.com/). It does the same thing, just with OpenCV, but I don't think every driver is supported.
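For reference, what ADVision wraps boils down to a few lines of plain OpenCV; a single "snap" in Python (generic OpenCV, nothing ADVision- or LabVIEW-specific) looks like this:

import cv2

cap = cv2.VideoCapture(0)   # first camera the OS exposes
ok, frame = cap.read()      # "snap": grab a single frame
if ok:
    cv2.imshow('snap', frame)
    cv2.waitKey(0)          # wait for a keypress before closing
cap.release()
cv2.destroyAllWindows()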
Also, remember that only USB cameras with a DirectShow filter are supported by the Vision Acquisition Software, which includes the IMAQdx driver that Birgit P. mentioned.
For USB 2.0 you need the IMAQdx toolkit, which is part of Vision Acquisition.
Also check NI MAX after installation to see whether LabVIEW can find your camera.
LabVIEW can find and support any USB 2.0 camera if you install the camera driver correctly.