Webcam access using OpenCV with Processing in an applet or application

I made a small sketch in Processing using OpenCV to detect and track faces. But when I export it as an application and start it, nothing happens; the same goes for the applet. At first I exported the applet and thought this must be a security issue, so I struggled with the hack for Processing to sign your Java applet:
http://processing.org/hacks/hacks:signapplet
With no luck there, I thought I would at least be able to run it as an application, but no success.
Anyone know what to do?

If you're on OS X 10.6 there is a known bug with the QuickTime camera capture not working in 64-bit mode. Make sure both your install of Processing and the exported application are set to run in 32-bit mode:
Right-click -> Get Info -> Open in 32-bit mode
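For context, a face-tracking sketch of the kind described in the question typically looks something like the minimal sketch below. It assumes the hypermedia.video OpenCV library that was common with Processing 1.x; the asker's actual code and cascade settings may differ.

import hypermedia.video.*;
import java.awt.Rectangle;

OpenCV opencv;

void setup() {
  size(320, 240);
  opencv = new OpenCV(this);
  opencv.capture(width, height);                    // open the default camera
  opencv.cascade(OpenCV.CASCADE_FRONTALFACE_ALT);   // load the face cascade
}

void draw() {
  opencv.read();                                    // grab a frame
  image(opencv.image(), 0, 0);                      // draw it
  // detect anything resembling a frontal face
  Rectangle[] faces = opencv.detect(1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40);
  noFill();
  stroke(255, 0, 0);
  for (int i = 0; i < faces.length; i++) {
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
}

If a sketch like this runs fine inside the Processing IDE but the exported version shows nothing, the capture pipeline (here, QuickTime in 32-bit vs. 64-bit mode) is the first thing to check.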

Related

Electron Linux Video Hardware Acceleration

I'm having a hard time getting hardware acceleration for videos working in Electron running on Linux (ARM64) and Linux (Intel64). I'm not sure if this is an issue with the flags Electron is passing to Chromium or if it's more an issue at the driver level on the host machines. Or maybe it's just not possible. Both machines are running Chromium 95 (snap, 64-bit).
When running Chromium (ARM64) without any flags and opening chrome://gpu I get the following:
When running Chromium (ARM64) with --enable-features=VaapiVideoDecoder I get the following:
This leads me to believe that when Chromium is called with that flag, hardware acceleration should be working. To add to the complexity, if I go to YouTube and check the media info it looks like it may still be disabled (even with the flags):
I have read through a number of articles titled 'how to enable hardware acceleration in Electron'. Most of them list the following flags:
app.commandLine.appendSwitch('ignore-gpu-blacklist')
app.commandLine.appendSwitch('enable-gpu-rasterization')
app.commandLine.appendSwitch('enable-accelerated-video')
app.commandLine.appendSwitch('enable-accelerated-video-decode')
app.commandLine.appendSwitch('use-gl', 'desktop')
app.commandLine.appendSwitch('enable-features', 'VaapiVideoDecoder')
I have tried all of these, but nothing seems to make any difference. When running a video in Electron it has the following properties:
Is anyone able to point me in the right direction with this? Thank you.
This has been solved. The main issue was that the VA-API driver needed to be installed on the hardware running the application. Secondly, the only flag needed was the following:
app.commandLine.appendSwitch('enable-features', 'VaapiVideoDecoder')

Jmyron and Windows 8

I am running into hardware issues for which perhaps someone here knows a workaround. I am using a PC and Windows.
For several years I have been making interactive installations using video tracking with the Jmyron library in Processing, which has worked marvelously for me. My setup: CCTV-type microcameras feed a multiplexer, then I digitize that signal via a FireWire cable to a PCI card. Processing then reads these quads (sometimes more) as a single window, and it has always worked (from Windows XP all the way to 7).
Then comes Windows 8: Processing seems to prefer the built-in webcam to the FireWire bus. On previous versions of Windows, the FireWire bus would naturally override the webcam, provided I had first opened a video capture in Windows Movie Maker and then shut it down before running the Processing sketch. In Windows 7, which had no native video capture software, I used a great open-source tool called CaptureFlux. The webcam never interfered. With Windows 8, no matter what I try, Processing defaults to the webcam, which for my purposes is useless. I have an exhibition coming up very soon, and there is no way I will have the time to rewrite all that code for OpenCV or other newer libraries.
I am curious whether anyone has had similar problems and found a workaround. Is there a way of disabling the webcam in Windows 8 (temporarily, of course, because I need it to be operational for other applications), or some other solution?
Thank you!
Try this:
type "windows icon+x" choose device manager (or use run/command line: "mmc devmgmt.msc")
look for imaganing devices, find your integrated webcamera
right click on it and choose disable - now processing should skip the device.
Repeat the steps to reenable the device.
Other solution would be using commands in processing:
println (Capture.list()); (google it on processing.org) this way you will get all avaliable devices and you can choose the particular one based on its name.
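A minimal sketch of that approach, assuming the standard Processing video (Capture) library; the device names printed by Capture.list() and the entry you pick will depend on your machine:

import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  // List every capture device Processing can see
  String[] cameras = Capture.list();
  println(cameras);
  // cameras[0] is just an example; pick the entry that matches your FireWire input
  cam = new Capture(this, cameras[0]);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
  }
  image(cam, 0, 0);
}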
Hope this helps.

BB10 - Cascades Application - Console/Terminal Application

I am attempting to create a simple terminal application that runs on a BB10 device/simulator. I have gone through all of the available demo/example applications:
http://developer.blackberry.com/native/
I can't seem to find a way to have an application run as a console/terminal/tty interface on the BB10 device I'm developing for. I was hoping to port some simple console games (e.g. something like "Hunt the Wumpus", http://en.wikipedia.org/wiki/Hunt_the_Wumpus), and then maybe take a crack at a Rogue or NetHack port as well, hopefully without having to depend on the ncurses library (http://en.wikipedia.org/wiki/Ncurses), though it's OK if I have to rely on ncurses; it just saves me from having to write additional interfacing code.
Can someone please provide a short, simple example of what I would need to write in a basic BB10 application that opens a text terminal with color support? It can be short, just something I can paste into an empty project.
Thank you in advance!
Here you go:
https://github.com/blackberry/NDK-Samples/tree/master/HelloWorldDisplay
It's listed near the bottom.

OpenCV does not open videos on other computers

I have an x64 project that works perfectly fine on my Windows 7 machine, whether I run the deployed version or run it from Visual Studio 2010. Now I have gotten hold of 4 other x64 Windows 7 machines and tried to install it on them; they work fine except that the bit of code which has to capture video always fails to open the file.
That bit is typical OpenCV video capture code:
cap = VideoCapture(file);   // 'file' is the path chosen in the file dialog
if (!cap.isOpened())
{
    cerr << "I have failed!" << endl;
    return 0;
}
The file variable is set when the user chooses the file to load via a file dialog, and the same code works perfectly fine on all machines if the file is, for instance, a picture, so that's not the reason.
Maybe the produced installer does not contain the necessary library or something like that. I really have no idea.
Cheers,
Vilius
OK, as I suspected, some libraries were missing from the installer package. I managed to load video files once I copied my compiled OpenCV to the other computer and added its location to the path.
Since many people had problems loading videos when ffmpeg was not configured, I added this library manually to the deployed software and it fixed the problem. So the problem was that Visual Studio was not adding the opencv_ffmpeg241_64.dll library to the installer.
Cheers,
Vilius
If the installer packed everything correctly, you still have to take care of the codecs yourself.
Try to install a codec pack (K-Lite Codec Pack or something else) on those machines.
On the other machines, have you configured OpenCV with the ffmpeg option while installing it? Check the CMake configuration list and post it here if possible.
Also, it wouldn't hurt to check if you have the respective camera drivers installed correctly for those computers if you are capturing frames directly instead of from a file.

OpenCV from Windows to iOS

I want to train OpenCV on a server and send the XML it generates to an iOS device, where an app will recognize faces using the XML trained by the server. I will use OpenCV in both apps, but the server runs Windows (training) and the device runs iOS (recognition).
So my main question is very simple:
Can the XML generated by the Windows version of OpenCV be used by the iOS version of OpenCV without any trouble? Has somebody done something similar who can give me some tips?
On Windows I will use .NET.
I think there won't be any trouble because it is the same library (OpenCV), so I suppose the internal algorithms are the same, but I want to be sure before starting the project.
Best Regards and thanks for your time
There is no problem, but you must train with images taken from your devices. It is normal to have multiple XML sets depending on your different cameras. Normally you release these with the binary rather than as a download, but still...
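The trained cascade is a plain XML file and is platform independent, so any OpenCV binding can read it. As a quick sanity check that a trained file loads and detects, here is an illustrative sketch using OpenCV's Java bindings (the file names are placeholders; you would do the equivalent with the C++ or iOS API):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.objdetect.CascadeClassifier;

public class CascadeCheck {
    public static void main(String[] args) {
        // Load the native OpenCV library built for whatever platform this runs on
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // "cascade.xml" stands in for the file produced by training on the server
        CascadeClassifier cascade = new CascadeClassifier("cascade.xml");
        if (cascade.empty()) {
            System.err.println("Cascade failed to load");
            return;
        }

        // Run a detection on a test image to confirm the cascade actually works
        Mat img = Imgcodecs.imread("test.jpg");
        MatOfRect faces = new MatOfRect();
        cascade.detectMultiScale(img, faces);
        System.out.println("Detected " + faces.toArray().length + " face(s)");
    }
}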
