VideoCapture.open(0) won't recognize pi cam - opencv

I have been working with my Raspberry Pi 2B for a while now. Testing the Pi camera with raspistill works great, but OpenCV calls such as VideoCapture.open() won't open it. The same call works just fine with a USB camera. I tried different indexes as inputs, but nothing works for the Pi camera. What am I missing here?

sudo modprobe bcm2835-v4l2
will "enable" the camera for opencv automatically.
make sure you have the camera enabled from the raspberry config, either gui or raspi-config. the above loads the necessary drivers to handle everything automatically, i.e. loads the appropriate interfaces (v4l2 drivers) for the raspberry camera.
works out of the box on raspbian jessie. other releases might include the drivers by default, but the link below contains info on compiling the drivers in your worst case. so you should be able to get this to work with pidora as well.
more info: https://www.raspberrypi.org/forums/viewtopic.php?f=43&t=62364
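Once the module is loaded, the camera shows up as /dev/video0 and a plain OpenCV capture should work. A minimal Python sketch (assuming device index 0 and the cv2 bindings installed):

import cv2

cap = cv2.VideoCapture(0)  # /dev/video0, provided by bcm2835-v4l2
if not cap.isOpened():
    raise RuntimeError("camera not opened - is bcm2835-v4l2 loaded?")
ok, frame = cap.read()  # grab a single frame
if ok:
    cv2.imwrite("test.jpg", frame)
cap.release()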

I assume your question is about the C++ API, not the Python one? As far as I understand, the Raspberry Pi camera is not a USB camera and as such should be approached differently. For Python there is the picamera package, which works like a charm (with OpenCV). I never used the C++ interface, but a quick google leads to this
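If you do go the Python route, here is a minimal sketch of grabbing one frame with picamera and handing it to OpenCV (the resolution and warm-up delay are assumptions):

import time
import cv2
from picamera import PiCamera
from picamera.array import PiRGBArray

camera = PiCamera()
camera.resolution = (640, 480)
raw = PiRGBArray(camera, size=(640, 480))
time.sleep(0.1)  # brief warm-up so exposure can settle
camera.capture(raw, format="bgr")  # BGR so the array is directly usable by OpenCV
frame = raw.array
cv2.imwrite("frame.jpg", frame)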

Related

Adding a CUPS printing server filter

So I just got a Raspberry Pi with the intention of making my Rollo label printer wireless. I installed Raspbian and PrintNode, but when it came down to downloading the Rollo driver onto the device, I couldn't, because it expects an x86_64 CPU and mine is armv7l.
I inspected the .ppd and found that the main issue was a failing filter, because Rollo is trying to use a filter that CUPS cannot find.
Is there a way to just copy a filter into CUPS? How do I add a custom filter so CUPS can find it and I can use my printer?
I've been hunting this down, trying to solve the exact same problem, for a while now, and I have come to the conclusion that it's not possible unless one of these things happens:
Rollo releases a filter that's been compiled for the ARM architecture.
Someone reverse engineers the compiled filter and recompiles it for ARM.
Rollo releases the filter source code so we can compile our own for the RPi (i.e. ARM).
The first two seem the most unlikely. The third seems possible, and I think I'll ask them to do just that... but who knows how they view their IP.
That said, I punted for a small fanless (well, it had a fan...) Celeron micro PC that runs Ubuntu x86_64 until the Rollo filter comes out for ARM. Good luck!
Update: Rollo has since released a beta RPi driver here: https://www.rollo.com/driver-dl/beta/rollo-driver-raspberrypi-beta.zip

Point Grey Bumblebee2 firewire 1394 with Nvidia Jetson TK1 board

I have successfully interfaced a Point Grey Bumblebee2 FireWire 1394 camera with the Nvidia Jetson TK1 board: I get video using Coriander, and the Video4Linux loopback device is working as well. But when I try to access the camera with OpenCV and Coriander at the same time, I get conflicts. And when I access the video from the camera after closing Coriander, I do get the video, but in that case I am not able to change the mode and format of the video. Can anyone help me resolve this problem? Can I change the video mode of the camera from OpenCV?
You will have to install the FlyCapture SDK for ARM if you want to do it manually (in code). I don't believe the FlyCap UI software works on ARM, let alone Ubuntu 14.04; it only supports Ubuntu 12.04 x86. If you have access, what I usually do is plug the camera into my Windows machine and use the FlyCap software to change the configuration on the camera.
I found this question completely randomly, but coincidentally I am trying to interface the Bumblebee2 with the Jetson right now as well. Would you care to share which FireWire mini-PCIe card you used and how you went about the configuration (stock or Grinch kernel, which L4T version)?
Also, although not fully complete, you can view a code example of how to interface with the camera using the FlyCapture SDK here: https://github.com/ros-drivers/pointgrey_camera_driver. It is a ROS driver, but you can just reference the PointGreyCamera.cpp file for examples if you're not using ROS.
Hope this helps
This is not well advertised, but Point Grey do not support FireWire on ARM (page 4):
Before installing FlyCapture, you must have the following prerequisites:... A Point Grey USB 3.0 camera (Blackfly, Grasshopper3, or Flea3)
Other Point Grey imaging cameras (FireWire, GigE, or CameraLink) are NOT supported
However, as you have seen, it is possible to use the camera (e.g. in Coriander) via standard FireWire tools.
libdc1394 or the videography library should do what you need.
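On the original question of changing the video mode from OpenCV: if your OpenCV build includes dc1394 support, something along these lines may work. This is only a sketch; explicitly selecting the CAP_FIREWIRE backend needs a recent OpenCV (3.4+/4.x), and whether CAP_PROP_MODE does anything depends on the backend and camera:

import cv2

cap = cv2.VideoCapture(0, cv2.CAP_FIREWIRE)  # ask for the FireWire/DC1394 backend explicitly
if cap.isOpened():
    cap.set(cv2.CAP_PROP_MODE, 0)  # backend-specific video mode index (assumption)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
    ok, frame = cap.read()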

Jmyron and Windows 8

I am running into hardware issues for which perhaps someone here knows a workaround. I am using a PC and Windows.
For several years I have been making interactive installations using video tracking: the Jmyron library in Processing, which has functioned marvelously for me. My setup: CCTV-type micro cameras go into a multiplexer, then I digitize that signal via a FireWire cable to a PCI card. Processing then reads these quads (sometimes more) as a single window, and it has always worked (from Windows XP all the way to 7).
Then comes Windows 8: Processing seems to prefer the built-in webcam to the FireWire bus. On previous versions of Windows, the FireWire bus would naturally override the webcam, provided I had first opened a video capture in Windows Movie Maker and then shut it down before running the Processing sketch. In Windows 7, which had no native video capture software, I used a great open-source video tool called Capture Flux; the webcam never interfered. With Windows 8, no matter what I try, Processing defaults to the webcam, which for my purposes is useless. I have an exhibition coming up very soon, and there is no way I will have time to rewrite all that code for OpenCV or other newer libraries.
I am curious if anyone has had similar problems and found a workaround. Is there a way of disabling the webcam in Windows 8 (temporarily, of course, because I need it to be operational for other applications), or some other solution?
Thank you!
Try this:
type "windows icon+x" choose device manager (or use run/command line: "mmc devmgmt.msc")
look for imaganing devices, find your integrated webcamera
right click on it and choose disable - now processing should skip the device.
Repeat the steps to reenable the device.
Other solution would be using commands in processing:
println (Capture.list()); (google it on processing.org) this way you will get all avaliable devices and you can choose the particular one based on its name.
Hope this helps.

Implementing OpenCV in Qt Creator on Raspberry Pi

I am trying to record video with my USB webcam in a Qt GUI application on my Raspberry Pi.
All the tutorials I can find focus on cross-compiling; I want to program and build directly on the Pi. I've successfully installed Qt Creator and built GUI applications, and they're working.
Questions:
Is it possible to use OpenCV from Qt Creator on the Raspberry Pi? If yes, how?
Is this the correct approach to solve the problem?
Is there a better way to record video from a Qt GUI application besides OpenCV?
Thank you very much!

Capturing multiple webcams (uvcvideo) with OpenCV on Linux

I am trying to simultaneously stream the images from 3 Logitech Webcam Pro 900 devices using OpenCV 2.1 on Ubuntu 11.10. The uvcvideo driver gets loaded for these.
Capturing from two devices works fine; however, with three I run into an out-of-space error for the third:
libv4l2: error turning on stream: No space left on device
I seem to be running into this issue:
http://renoirsrants.blogspot.com.au/2011/07/multiple-webcams-on-zoneminder.html and I have attempted the quirks=128 trick (and pretty much every other power-of-two value), but to no avail. I also tried another machine with two USB 2.0 hubs, connecting two cameras to one hub and the third camera to the other, which resulted in the same problem. I am initializing roughly as follows (using N cameras, so the results actually go into an STL vector):
cv::VideoCapture cap0(0); //(0,1,2..)
and attempting to capture all the cameras in a loop as
cap0.retrieve(frame0);
This works fine for N=2 cameras. When I set N=3, the third window opens but no image appears, and the console is spammed full of V4L2 errors. Similarly, when I set N=2 and attempt to open the third camera in, say, Cheese (a simple webcam capture application), that doesn't work either.
Now comes the big but: after starting three instances of guvcview, I was able to view three cameras at once (with no problems in frame rate or otherwise), so it does not seem to be a hardware issue. I figure there is some property I should set, but I'm not sure which. I have looked into MJPEG (which these cameras seem to support), but I haven't succeeded in setting that property, or in detecting which mode (YUYV?) the cameras run in when started from OpenCV.
Thoughts?
I had this problem too, and I have a solution that lets me capture from 2 cameras at 640x480 with MJPEG compression. I am using a Creative "Live Cam Sync HD VF0770", which incorrectly reports its bandwidth requirements. The quirks=128 fix works for 320x240 uncompressed video, but for the compressed (MJPG) format quirks=128 does not work (it does nothing for compressed formats).
To fix this I modified the uvc driver as follows:
download the kernel sources
mkdir -p ~/Software/kernel-git
cd ~/Software/kernel-git
git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
cd linux
git checkout v3.2
# NOTE: `uname -r` shows me my current kernel is 3.2.0-60-generic
# For a different kernel use a different tag
copy uvc dir:
mkdir -p ~/Software/uvcvideo_driver
cd ~/Software/uvcvideo_driver
#cp -a ~/Software/kernel-git/linux/drivers/media/usb/uvc .   # newer kernel layout
cp -a ~/Software/kernel-git/linux/drivers/media/video/uvc .  # kernel 3.2 layout
modify Makefile
cd ~/Software/uvcvideo_driver/uvc
vi Makefile
obj-m += aauvcvideo.o
aauvcvideo-objs := uvc_driver.o uvc_queue.o uvc_v4l2.o uvc_video.o uvc_ctrl.o \
                   uvc_status.o uvc_isight.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
Force the bandwidth to 0x400 when compressed:
cd ~/Software/uvcvideo_driver/uvc
vi uvc_video.c
Find the uvc_fixup_video_ctrl() function. At the end of the function add:
if (format->flags & UVC_FMT_FLAG_COMPRESSED) {
    ctrl->dwMaxPayloadTransferSize = 0x400;
}
build the aauvcvideo module:
make
remove old module and insert new one:
sudo rmmod uvcvideo
sudo insmod ./aauvcvideo.ko quirks=128
run guvcview twice, with compression, in 2 different windows to test:
guvcview --device=/dev/video1 --format=mjpg --size=640x480
guvcview --device=/dev/video2 --format=mjpg --size=640x480
Good luck!
-Acorn
I had this exact problem, using three Logitech QuickCam Pro 9000 cameras (on Ubuntu). I could read from two, but not three. In my case I wasn't using OpenCV; I was accessing the cameras through V4L2 directly, using memory-mapped IO. Simply put, there was not enough USB bandwidth to allocate three buffers.
I was reading in uncompressed frames, however. As soon as I switched the format to MJPEG, the data was small enough that I could read from all three cameras. I used libjpeg to decode the MJPEG stream.
I haven't looked into how to change the image format using OpenCV, but I do know that it needs to be MJPEG to fit all that data.
Before I switched to MJPEG, I spent a lot of time trying to access each camera one at a time, streaming a single frame before switching to the next. Not recommended!
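For what it's worth, newer OpenCV builds let you request MJPEG through the FOURCC property. A minimal Python sketch (whether the camera actually honors the request depends on the driver and the backend OpenCV was built with):

import cv2

cap = cv2.VideoCapture(0)
# Ask the driver for compressed MJPG frames instead of uncompressed YUYV.
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
ok, frame = cap.read()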
Most likely there is USB bandwidth contention, reported by the driver of the video capture device. Check whether the pixel format is YUYV, which happens to be uncompressed. If, on the other hand, the pixel format is MJPG (compressed), it is possible to have multiple devices on the same USB channel.
v4l2-ctl -d /dev/video0 --list-formats
The output would be something like below:
ioctl: VIDIOC_ENUM_FMT
Index : 0
Type : Video Capture
Pixel Format: 'YUYV'
Name : 16bpp YUY2, 4:2:2, packed
The following are possible solutions:
Use capture devices from different manufacturers, so that different drivers are loaded. In general, a single driver handling multiple devices needs to share the bandwidth among all of them.
Use a PCI USB extension card, if available, to attach the 2nd USB video capture device. This workaround worked excellently for me when I attached an AVerMedia DVD EZMaker 7, which loads the cx231xx driver.
OpenCV can be built to use either v4l or libv4l, and only the v4l version supports compressed formats, while the libv4l version supports just one uncompressed format for OpenCV 2.4.11. (See autosetup_capture_mode_v4l2() for v4l and the code following line 692 for libv4l.) OpenCV 3.0.0 does not improve much over 2.4.11 here; it still supports only uncompressed formats for libv4l.
Since your error mentions libv4l2, you seem to have the libv4l version and OpenCV captured uncompressed in your case. To build a v4l version of OpenCV, your cmake command should contain
-D WITH_LIBV4L=OFF
(WITH_LIBV4L was enabled by default for me.)
A note on bandwidth and USB. USB 2.0 (which virtually all webcams use) has a bandwidth of 480 Mbit/s. 640x480 at 30 fps and 24 bits/pixel uncompressed is about 221 Mbit/s, so one can use up USB 2.0 bandwidth quickly with uncompressed webcams. One gets 480 Mbit/s for each USB host controller, see this answer on how to list them. (USB hubs do not add host controllers, and several USB ports on a motherboard are typically connected to the same host controller. All devices and hubs connected to a host controller share the bandwidth.)
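As a quick sanity check of that 221 Mbit/s figure (plain arithmetic, nothing OpenCV-specific):

# 640x480 at 30 fps, 24 bits per pixel, uncompressed
width, height, fps, bits_per_pixel = 640, 480, 30, 24
mbit_per_s = width * height * fps * bits_per_pixel / 1e6
print(mbit_per_s)  # ~221 Mbit/s, i.e. nearly half of USB 2.0's 480 Mbit/s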
For webcams that reserve more USB bandwidth than they need, e.g., those with footnote [13] on the UVC driver page, the FIX_BANDWIDTH quirk can help. But the quirk works only for uncompressed formats (unless you do the kernel hack in Acorn's answer here). In my case (Ubuntu 14.04, many Microsoft LifeCam Cinemas at 320x240), the quirk worked when I used the libv4l version of OpenCV (four LifeCams on an ASMedia USB host controller worked well) but for the v4l version -- which I confirmed to use MJPEG -- I got a VIDIOC_STREAMON: No space left on device error as soon as I tried to capture from a second LifeCam! (For the same machine, Intel and VIA host controllers did better and each worked with two LifeCams for v4l; the LifeCam reserves 48% of USB 2.0 bandwidth.)
This works like a charm for me:
sudo rmmod uvcvideo
sudo modprobe uvcvideo quirks=128
This will be reset on every reboot. If it works, make it permanent by creating the following file:
sudo vi /etc/modprobe.d/uvcvideo.conf
containing the line:
options uvcvideo quirks=128
Check this link:
http://renoirsrants.blogspot.in/2011/07/multiple-webcams-on-zoneminder.html
With kernel 4.15.0-34-generic on Ubuntu 18.04 and OpenCV 3.4 compiled with GStreamer/V4L support, I am able to stream 3x720p on a single USB port (through a powered hub) using MJPG compression with GStreamer in Python (using 2x C922 and 1x C920 cameras; the 10 fps framerate isn't necessary for this to work):

import cv2

def open_cam_usb(dev, width, height):
    # Decode the camera's MJPG stream with jpegdec, then hand frames to OpenCV.
    gst_str = (
        "v4l2src device=/dev/video{} ! "
        "image/jpeg,width=(int){},height=(int){},framerate=10/1 ! "
        "jpegdec ! videoconvert ! appsink"
    ).format(dev, width, height)
    return cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)
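A hypothetical usage of the helper above (the device number and window name are just examples):

cap = open_cam_usb(1, 1280, 720)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("usb-cam", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()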
One of the most helpful things I discovered was to place a Sleep(ms) call between your capture initializations. This allowed me to retrieve two webcam captures simultaneously without problems.
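In Python terms, a minimal sketch of that staggered initialization (the delay length is a guess; tune it for your hardware):

import time
import cv2

caps = []
for idx in range(2):
    caps.append(cv2.VideoCapture(idx))
    time.sleep(1.0)  # let the driver settle before opening the next device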
