I have successfully interfaced a Point Grey Bumblebee2 FireWire (IEEE 1394) camera with the Nvidia Jetson TK1 board: I get video using Coriander, and the Video4Linux loopback device is working as well. But when I try to access the camera with OpenCV and Coriander at the same time, I get conflicts. And when I close Coriander and access the video from the camera, I can get the video, but in that case I am not able to change the mode and format of the video. Can anyone help me resolve this problem? Can I change the video mode of the camera from OpenCV?
You will have to install the FlyCapture SDK for ARM if you want to do it manually (in code). I don't believe the FlyCap UI software works on ARM, let alone Ubuntu 14.04; it only supports Ubuntu 12.04 x86. If you have access to one, what I usually do is plug the camera into my Windows machine and use the FlyCap software to change the configuration on the camera.
I found this question completely randomly, but coincidentally I am trying to interface the Bumblebee2 with the Jetson right now as well. Would you care to share which FireWire mini-PCIe card you used and how you went about any configuration (stock or Grinch kernel, which L4T version)?
Also, although not fully complete, you can view a code example of how to interface with the camera using the FlyCapture SDK here: https://github.com/ros-drivers/pointgrey_camera_driver. It is a ROS driver, but you can just reference the PointGreyCamera.cpp file for examples if you're not using ROS.
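If you do end up using the FlyCapture2 C++ API, changing the video mode looks roughly like the sketch below. This is untested and only a sketch: the mode and frame-rate constants are example values, and (as noted in the other answer) the SDK may not support FireWire cameras on ARM at all.

#include <FlyCapture2.h>

int main()
{
    FlyCapture2::BusManager busMgr;
    FlyCapture2::PGRGuid guid;
    FlyCapture2::Camera cam;

    // Grab the first camera on the bus and connect to it
    FlyCapture2::Error err = busMgr.GetCameraFromIndex(0, &guid);
    if (err != FlyCapture2::PGRERROR_OK) { err.PrintErrorTrace(); return 1; }

    err = cam.Connect(&guid);
    if (err != FlyCapture2::PGRERROR_OK) { err.PrintErrorTrace(); return 1; }

    // Example: switch to 640x480 Y8 at 30 fps
    err = cam.SetVideoModeAndFrameRate(FlyCapture2::VIDEOMODE_640x480Y8,
                                       FlyCapture2::FRAMERATE_30);
    if (err != FlyCapture2::PGRERROR_OK) { err.PrintErrorTrace(); return 1; }

    cam.Disconnect();
    return 0;
}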
Hope this helps
This is not well advertised, but Point Grey does not support FireWire on ARM (page 4):
Before installing FlyCapture, you must have the following prerequisites:... A Point Grey USB 3.0 camera, (Blackfly, Grasshopper3, or Flea3)
Other Point Grey imaging cameras (FireWire, GigE, or CameraLink) are NOT supported
However, as you have seen, it is possible to use the camera (e.g. in Coriander) using the standard FireWire tools.
libdc1394 or the videography library should do what you need.
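For example, with libdc1394 (version 2) you can pick the video mode and frame rate yourself before starting capture, and then hand the frames to OpenCV. A rough, untested sketch, with example mode and frame-rate constants:

#include <dc1394/dc1394.h>
#include <stdio.h>

int main(void)
{
    dc1394_t *ctx = dc1394_new();
    dc1394camera_list_t *list;

    if (dc1394_camera_enumerate(ctx, &list) != DC1394_SUCCESS || list->num == 0) {
        fprintf(stderr, "No IEEE 1394 camera found\n");
        return 1;
    }

    dc1394camera_t *cam = dc1394_camera_new(ctx, list->ids[0].guid);
    dc1394_camera_free_list(list);

    /* Choose the video mode and frame rate before setting up capture */
    dc1394_video_set_mode(cam, DC1394_VIDEO_MODE_640x480_MONO8);
    dc1394_video_set_framerate(cam, DC1394_FRAMERATE_30);

    dc1394_capture_setup(cam, 4, DC1394_CAPTURE_FLAGS_DEFAULT);
    dc1394_video_set_transmission(cam, DC1394_ON);

    /* ... grab frames with dc1394_capture_dequeue() and wrap them in cv::Mat ... */

    dc1394_video_set_transmission(cam, DC1394_OFF);
    dc1394_capture_stop(cam);
    dc1394_camera_free(cam);
    dc1394_free(ctx);
    return 0;
}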
Related
The examples and documentation for the Spresense contain a lot of very clear information, yet I think there is something missing about using the digital mics with the Arduino IDE. The modifications to the extension board for using digital mics are very clearly documented, with nice pictures. The Arduino example projects are great, showing you how to record, encode, etc. And I've also understood that you must tell the recorder to use the digital microphones with the following:
theAudio->setRecorderMode(AS_SETRECDR_STS_INPUTDEVICE_MIC_D);
There are also nice details in the audio documentation explaining that CXD56_AUDIO_MIC_CHANNEL_SEL must be changed from the default value of 0xFFFF4321, which is for analog microphones, to values for digital microphones. I've been able to follow the instructions for rebuilding the NuttX kernel and Spresense SDK with a new value of 0xCBA98765, which should enable eight digital mics. The last piece that is not clear is which NuttX/SDK binary files now need to be copied over to the Arduino environment. I have a Windows PC for use with the Arduino IDE and a Linux PC for building NuttX and those examples. Can you please list which files on the Linux machine I need to copy over to the Windows PC so that the Arduino IDE uses the SDK that enables the digital mics? Sorry if this is documented somewhere and I overlooked it!
The instructions provided by Sony for recording with the digital mics work fine! It was a hardware problem with my microphones. I was able to use the NuttX example named audio_recorder. I haven't tried with Arduino, and the process of copying files from a NuttX build into the Arduino build folders is still not very clear, but that's a separate issue.
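For anyone trying the Arduino side, the sketch should end up looking much like Sony's stock recorder example with the digital-mic input selected. The following is an untested sketch based on that example; the file name, duration, and sample rate are arbitrary, and the exact function signatures may differ between SDK versions:

#include <SDHCI.h>
#include <Audio.h>

SDClass theSD;
AudioClass *theAudio;
File myFile;

void setup()
{
  theSD.begin();
  theAudio = AudioClass::getInstance();
  theAudio->begin();

  // Select the digital microphones instead of the default analog ones
  theAudio->setRecorderMode(AS_SETRECDR_STS_INPUTDEVICE_MIC_D);

  // WAV recording; the codec DSP binaries are expected under /mnt/sd0/BIN
  theAudio->initRecorder(AS_CODECTYPE_WAV, "/mnt/sd0/BIN",
                         AS_SAMPLINGRATE_48000, AS_CHANNEL_STEREO);

  myFile = theSD.open("digital_mics.wav", FILE_WRITE);
  theAudio->writeWavHeader(myFile);
  theAudio->startRecorder();
}

void loop()
{
  // Move captured audio from the library's buffer into the file
  theAudio->readFrames(myFile);

  // Stop after roughly ten seconds
  if (millis() > 10000) {
    theAudio->stopRecorder();
    theAudio->closeOutputFile(myFile);
    myFile.close();
    theAudio->setReadyMode();
    theAudio->end();
    while (true) {}
  }
}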
I have been working with my Raspberry Pi 2B for a while now. Testing the Pi camera with raspistill works great, but trying to use OpenCV functions such as VideoCapture.open() doesn't work. Trying the same command with a USB camera works just fine. I tried different indexes as inputs, but nothing works for the Pi camera. What am I missing here?
sudo modprobe bcm2835-v4l2
will "enable" the camera for opencv automatically.
Make sure you have the camera enabled in the Raspberry Pi configuration, either via the GUI or raspi-config. The command above loads the necessary drivers to handle everything automatically, i.e. it loads the appropriate V4L2 interface for the Raspberry Pi camera.
This works out of the box on Raspbian Jessie. Other releases might not include the drivers by default, but the link below contains info on compiling the drivers yourself in the worst case, so you should be able to get this to work with Pidora as well.
more info: https://www.raspberrypi.org/forums/viewtopic.php?f=43&t=62364
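Once the module is loaded, the Pi camera appears as /dev/video0 and a plain cv::VideoCapture on index 0 should open it. A minimal, untested check in C++:

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // With bcm2835-v4l2 loaded, the Pi camera shows up as /dev/video0
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) {
        std::cerr << "Could not open the camera" << std::endl;
        return 1;
    }

    cv::Mat frame;
    cap >> frame;                    // grab one frame
    cv::imwrite("test.png", frame);  // save it so you can check the result
    return 0;
}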
I assume your question is about the C++ API, not the Python one? As far as I understand, the Raspberry Pi camera is not a USB camera and as such should be approached differently. For Python there is the picamera package, which works like a charm (with OpenCV). I never used the C++ interface, but a quick Google search leads to this
I have installed OpenCV on my desktop and laptop, both running Ubuntu 14, and I have a problem with its image viewer.
First of all, when I type:
./facedetect --cascade="/usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_alt.xml" --nested-cascade="/usr/local/share/OpenCV/haarcascades/haarcascade_eye.xml" --scale=1.5 [address of my image]
It shows my image in its image viewer, but the window isn't resizable on my desktop, and it doesn't show the control buttons at the top on my laptop.
How can I fix these problems, or can I change its image viewer?
In many of its demo applications, OpenCV uses its own GUI module (highgui); its features are limited and platform-dependent. For example, I think the "auto-zoom" feature that lets you see the pixel values is available only on Windows. And although recent versions added some Qt support for extra features (buttons, ...), the app has to be built with that support enabled, which is probably not the case in your example.
However, you can always edit the code of these apps (here, the facedetect app) so that it just saves the images to disk instead of showing them on screen, then rebuild. Or add the buttons you want yourself; see the manual.
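For example, a small helper like this (a sketch, not the actual facedetect code) could replace the imshow() call, writing each processed image to a numbered file instead:

#include <opencv2/opencv.hpp>
#include <sstream>

// Call this wherever facedetect currently calls imshow() on the result image
void saveResult(const cv::Mat& img)
{
    static int frameIndex = 0;
    std::ostringstream name;
    name << "result_" << frameIndex++ << ".png";
    cv::imwrite(name.str(), img);   // write the annotated image to disk
}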
I am running into a hardware issue for which perhaps someone here knows a workaround. I am using a PC and Windows.
For several years I have been making interactive installations using video tracking: the JMyron library in Processing, which has worked marvelously for me. I use this setup: CCTV-type micro-cameras into a multiplexer, then I digitize this signal via a FireWire cable to a PCI card. Processing then reads these quads (sometimes more) as a single window, and it has always worked (from Windows XP all the way to 7). Then comes Windows 8: Processing seems to prefer the built-in webcam to the FireWire bus. On previous versions of Windows, the FireWire bus would naturally override the webcam, provided I had first opened a video capture in Windows Maker and then shut it down before running the Processing sketch. In Windows 7, which had no native video capture software, I used a great open-source video editor called Capture Flux. The webcam never interfered. With Windows 8, no matter what I try, Processing defaults to the webcam, which for my purposes is useless. I have an exhibition coming up real soon, and there is no way I am going to have the time to rewrite all that code for OpenCV or other newer libraries.
I am curious if anyone has had similar problems and found a workaround. Is there a way of disabling the webcam in Windows 8 (temporarily, of course, because I need it to be operational for other applications), or some other solution?
Thank you!
Try this:
type "windows icon+x" choose device manager (or use run/command line: "mmc devmgmt.msc")
Look under Imaging devices and find your integrated webcam.
Right-click on it and choose Disable; now Processing should skip the device.
Repeat the steps to re-enable the device.
Another solution would be to use commands in Processing:
println(Capture.list()); (look it up on processing.org). This way you will get all available devices and can choose a particular one based on its name.
Hope this helps.
I am using Martin Peris's code for 3D reconstruction using OpenCV and PCL (link below):
http://blog.martinperis.com/2012/01/3d-reconstruction-with-opencv-and-point.html
Trouble point:
I am having trouble with the final step in viewing the 3D reconstruction in the "3D viewer" window. I am getting a perfect disparity image as shown in the blog but my final reconstruction image looks like this:
https://drive.google.com/file/d/0Bx1aNPhwJU4kMmt1cUVHVXBOLWM/edit?usp=sharing
You can compare this with the one which is shown in the video link given in that blog.
Things that I have tried:
Checked that all the required libraries are installed. I believe that otherwise the code wouldn't compile or give me any results.
Checked if I have a graphics support on my machine:
$lspci | grep VGA
09:00.0 VGA compatible controller: NVIDIA Corporation G71GL [Quadro FX 3500] (rev a1)
My doubts:
Some library for OpenGL, OpenCV, or PCL might be missing, which is making the 3D reconstruction window suffer.
The controversial reprojectImageTo3D() function in OpenCV, which is also used in Martin Peris's code (see the sketch of a typical call after this list).
Some other reason that one of you could help me with ;-)
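(For reference, a typical call to reprojectImageTo3D() looks roughly like the sketch below, assuming the Q matrix comes from stereoRectify(); the file names are just placeholders and this is not Martin Peris's exact code.)

#include <opencv2/opencv.hpp>

int main()
{
    // Disparity computed elsewhere (e.g. with StereoBM/StereoSGBM)
    cv::Mat disparity = cv::imread("disparity.png", CV_LOAD_IMAGE_GRAYSCALE);

    // 4x4 reprojection matrix Q saved from the stereo calibration step
    cv::Mat Q;
    cv::FileStorage fs("Q.xml", cv::FileStorage::READ);
    fs["Q"] >> Q;

    cv::Mat xyz;
    // handleMissingValues=true maps invalid disparities to a very large Z
    cv::reprojectImageTo3D(disparity, xyz, Q, true);
    return 0;
}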
Other details:
Ubuntu Version : 12.04
OpenCV Version : 2.3.1-7
Any suggestions would really be helpful!
Thanks,
Pratul
RESOLVED!
It was actually a driver issue with my graphics card. To solve it, I removed the currently installed driver and then installed an updated one, and that worked like a charm.
I have posted the details of this solution on the PCL mailing list, as I didn't want to repeat myself here.
I hope this helps.