When accessing a webcam through Python OpenCV, isOpened() returns False and no data is read. Does it have something to do with WSL?
According to this GitHub issue, hardware access is not supported yet in WSL:
Ben Hillis: Hardware access is another area we will be investigating in the future.
All hardware-related CLI tools seem to fail (dmesg, lsblk, and lsusb return nothing; /dev is empty...), so it looks like this statement is still valid today. That explains why you cannot access your camera from OpenCV.
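For reference, here is a minimal check that reproduces the behaviour described in the question (a sketch, assuming opencv-python is installed; under classic WSL /dev/video0 simply does not exist):

    # Minimal repro: under WSL there is no /dev/video0, so nothing opens.
    import cv2

    cap = cv2.VideoCapture(0)
    print("isOpened():", cap.isOpened())      # False under WSL

    ok, frame = cap.read()
    print("read() ok:", ok, "frame:", frame)  # False, None
    cap.release()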
Related
I'm new to IP cameras and I know there are quite a lot of topics about this in the forum already, but I can't find a concrete answer for my needs.
I want to access an IP camera using OpenCV in Python from a Windows PC. As I don't have a camera yet, I need to buy one, and I can't figure out what requirements this camera needs to have.
For example, there are quite cheap IP cameras (e.g. Xi****) which say they come with an Android or iOS app and are only accessible via those.
I thought you could access any IP cam via OpenCV, but now I'm not sure anymore... Can anyone give me an overview of what specs an IP cam needs to be accessible via OpenCV on Windows? I don't want to buy a camera and later realize that I can't access the video stream.
I'm really sorry if this has already been asked, but I can't find a satisfying answer to this question, and Google doesn't seem to be very helpful...
Thanks in advance.
Check for an IP cam that can transmit RTSP; OpenCV knows how to work with this type of stream.
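For illustration, a minimal sketch of opening an RTSP stream with OpenCV in Python; the URL, credentials, port and path below are placeholders that depend entirely on the camera model:

    # Hypothetical RTSP URL -- check the camera's documentation for the real one.
    import cv2

    url = "rtsp://user:password@192.168.1.64:554/stream1"  # placeholder
    cap = cv2.VideoCapture(url)
    if not cap.isOpened():
        raise RuntimeError("Could not open RTSP stream: " + url)

    while True:
        ok, frame = cap.read()
        if not ok:
            break  # stream ended or dropped
        cv2.imshow("IP camera", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()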
I have never ever asked this kind of question on StackOverflow before, and I wonder if you guys could help me, because it is a "bit" vague.
I have to design a project that uses a Teensy (a simple ARM platform) to get data from an IR camera (FLIR, resolution 80x60) over SPI, stream these data to a Linux/Windows machine (through USB-serial), and do something simple with OpenCV.
THE PROBLEM: The project lacks some "innovation". It should not be something very complicated, but rather a different approach, or trying something new.
Do you have recommendations/tutorials/books/experience with the things mentioned above? Or do you see potential for trying something new?
You might want to check out the OpenCV Cookbook for some ideas.
There is a project using this FLIR with a Teensy. It provides a thermal image on a small LCD screen (without any additional computer).
https://hackaday.io/project/8994-diy-thermocam
So, the Teensy can get data through SPI.
Can the Teensy send data through USB then? Probably, but you will have to check whether the rate is high enough.
Using OpenCV directly on the Teensy is not possible because of the size of the library, but you can probably do some basic image processing if the code is small enough.
The FLIR Lepton can be interfaced directly with a Linux or Windows computer, so I don't really see the use of the Teensy.
I would recommend a Raspberry Pi to interface the FLIR Lepton and then do some image processing. It's well documented on the web.
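To make the host side of such a project more concrete, here is a rough sketch of reading raw 80x60 frames forwarded by the Teensy over USB-serial and displaying them with OpenCV. The port name, baud rate and framing (a bare stream of 16-bit pixels with no sync header) are assumptions for illustration, not the Lepton's actual protocol:

    # Sketch only: assumes the Teensy forwards each frame as 80*60
    # little-endian 16-bit values with no header or sync bytes.
    import numpy as np
    import serial  # pyserial
    import cv2

    WIDTH, HEIGHT = 80, 60
    FRAME_BYTES = WIDTH * HEIGHT * 2

    port = serial.Serial("/dev/ttyACM0", 115200, timeout=2)  # placeholder port

    while True:
        raw = port.read(FRAME_BYTES)
        if len(raw) < FRAME_BYTES:
            continue  # incomplete frame, try again
        frame = np.frombuffer(raw, dtype="<u2").reshape(HEIGHT, WIDTH)
        # Stretch the raw 16-bit thermal values to 8 bits for display.
        disp = cv2.normalize(frame, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        disp = cv2.resize(disp, (WIDTH * 4, HEIGHT * 4),
                          interpolation=cv2.INTER_NEAREST)
        cv2.imshow("Lepton over USB-serial", disp)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

In a real setup you would want some kind of sync marker in the framing so the host can recover when it starts reading mid-frame.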
My laptop has two video cards, a high powered NVIDIA one and an onboard Intel one. When I call IDirect3D9::GetAdapterCount however, it only finds the onboard Intel one, probably because the high powered one is being hidden.
I'm able to go into my laptop settings and tell it to 'force choose' the NVIDIA card, and then it works, but this is not an acceptable solution for my end-users. I've also noticed that when I run Battlefield 3, it's able to find the NVIDIA card properly even without 'force choose' enabled. Maybe there's a special whitelist that has Battlefield listed? Or some other secret method?
Any ideas how to acquire that elusive card?
Are you sure the Intel chip is enumerable? Quite often it's not. When a discrete GPU is present, the Sandy Bridge (and older) integrated graphics is generally disabled. You probably want to check the NVIDIA Optimus test tool.
GetAdapterCount actually returns the count of monitors in the system, not video cards. And as far as I know, there is no way to force-choose a card programmatically.
If you are talking about NVIDIA Optimus technology, it chooses the video chip using driver settings.
I'm writing LabVIEW software that grabs images from an IMAQ compatible GigE camera.
The problem: This is a collaborative project, so I only have intermittent access to the actual camera. I'd like to be able to keep developing this software even when the camera isn't present.
Is there a simple/fast way to create a virtual or dummy IMAQ camera in software? Ideally I'd like the dummy camera to grab frames from an AVI or a stack of JPEGs. Something like this must exist, I just can't find it on Google.
I'm looking for something that won't take very long (e.g. < 2 hours of effort) and that is abstracted away through the standard LabVIEW IMAQ interface, so that my software won't know or care whether it's dealing with a dummy camera or an actual camera.
You can try this method using LabVIEW classes:
Hardware Emulation Using LabVIEW Classes
If you have the IMAQdx driver, you might consider just buying a cheap USB webcam for $10.
Use the IMAQdx driver (assuming you have it), and then insert the Vision Acquisition Express VI, and you can choose AVIs or even pics as a source.
Something like this: GigESim is camera emulation software. Unfortunately it is proprietary and too expensive (>$500) for my own needs, but perhaps others will find this link useful.
Anyone know of a viable Open Source alternative?
There's an IP Camera emulator project that emulates an IP camera with Python. I haven't used it myself, so I don't know if it can be used by IMAQ.
Let us know if it works for you.
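I don't know how that project works internally, but the basic idea of an IP camera emulator in Python is just an HTTP server pushing an MJPEG (multipart/x-mixed-replace) stream. A rough sketch, assuming a folder of JPEG frames to loop over; whether IMAQ will accept such a stream is a separate question:

    # Minimal MJPEG "IP camera" emulator sketch (not the project linked above).
    # Serves the JPEGs in ./frames in a loop at http://localhost:8080/stream.mjpg
    import glob
    import itertools
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    FRAMES = sorted(glob.glob("frames/*.jpg"))  # assumed folder of JPEG frames
    BOUNDARY = b"frameboundary"

    class MJPEGHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path != "/stream.mjpg":
                self.send_error(404)
                return
            self.send_response(200)
            self.send_header("Content-Type",
                             "multipart/x-mixed-replace; boundary=frameboundary")
            self.end_headers()
            try:
                for path in itertools.cycle(FRAMES):
                    with open(path, "rb") as f:
                        jpg = f.read()
                    self.wfile.write(b"--" + BOUNDARY + b"\r\n")
                    self.wfile.write(b"Content-Type: image/jpeg\r\n")
                    self.wfile.write(b"Content-Length: %d\r\n\r\n" % len(jpg))
                    self.wfile.write(jpg + b"\r\n")
                    time.sleep(1 / 15)  # ~15 fps
            except (BrokenPipeError, ConnectionResetError):
                pass  # client disconnected

    HTTPServer(("0.0.0.0", 8080), MJPEGHandler).serve_forever()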
I know this question is really old, but hopefully this answer helps someone out.
IMAQdx also works with Windows DirectShow devices. While normally these are actual physical capture devices (think USB Webcams), there is no necessity that they have to be.
There are a few different pre-made options available on the web. I found using Open Broadcaster Studio and this Virtual Cam plugin to be easy enough. Basically:
Download and install both.
Load your media sources in the sources list.
Enable the VirtualCam stream (Tools > VirtualCam). Press Start.
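If you want a quick sanity check that the virtual camera really shows up as a DirectShow device before pointing IMAQdx at it, you can probe it from Python with OpenCV's DirectShow backend; the index it appears at depends on what other cameras are installed:

    # Probe the first few DirectShow device indices to find the OBS VirtualCam.
    import cv2

    for index in range(5):
        cap = cv2.VideoCapture(index, cv2.CAP_DSHOW)
        if cap.isOpened():
            ok, _frame = cap.read()
            print(f"DirectShow device {index}: opened, frame read ok = {ok}")
        else:
            print(f"DirectShow device {index}: not available")
        cap.release()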
I recently started to learn how to use OpenCL to speed up some parts of my code. So far the speed gain is impressive: in one case the code ran up to 50x faster than on the CPU. However, I wonder if I can start using this code in a production environment. The reason is that the first time I tried to run the example code, nothing worked. I was able to make it run by downloading the driver from the NVIDIA OpenCL SDK download page (I have a GeForce GTX 260). It gave me a blue screen during installation, but after that I was able to run the example program and create my own code.
Does the fact that it didn't work "out of the box" for me mean that the mainstream drivers do not yet support it, despite the download page specifically saying that they do? What about ATI support? Will everyone have to download the special driver that gave me a blue screen on install?
In short, is OpenCL ready for production code?
If someone can give me some details, I'd like to know. Has anyone been able to run a simple program on a number of different devices without installing anything SDK-related?
You may find an accurate answer on the OpenCL forums on the Khronos Group message boards. The OpenCL work group hangs out there regularly.
"Has anyone been able to run a simple program on a number of different devices without installing anything SDK-related?"
Nope. For instance, on ATI's GPUs end-users need to install the ATI Stream SDK in order to run OpenCL code (just having an up-to-date graphics driver is not sufficient).
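One practical way to see what an end-user's machine actually exposes is to enumerate the installed OpenCL platforms and devices at run time and fail gracefully if none are found. The sketch below uses pyopencl, which is an extra dependency not mentioned in the question; the same probe can be done in C through clGetPlatformIDs:

    # List whatever OpenCL platforms/devices the installed drivers expose.
    # Requires pyopencl (an assumption, not part of the original question).
    import pyopencl as cl

    try:
        platforms = cl.get_platforms()
    except cl.LogicError:
        platforms = []  # no ICD/driver installed

    if not platforms:
        print("No OpenCL platform found -- the vendor driver/SDK is probably missing.")
    else:
        for platform in platforms:
            print("Platform:", platform.name, "-", platform.version)
            for device in platform.get_devices():
                print("  Device:", device.name,
                      "(", cl.device_type.to_string(device.type), ")")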
You may want to consider trying DirectCompute (Microsoft's version of GPU programming) or doing your OpenCL work on a Snow Leopard Mac. Those are the two ways (that I know of) that you can deliver a GPU programming solution to another user without any driver or other installation hassle.