What approach should I take to reading a panel of LEDs with machine vision / image processing?

I have a panel of LEDs on my home heating system that I would like to monitor, non-invasively, using a webcam.
I can periodically take a photo of the LEDs and I would like to interpret the photos to determine which of my heating zones are active.
What software packages or solutions should I look at for identifying when each of the 6 LEDs is off or on? Ideally I would like some tolerance built in, in case the webcam moves very slightly. I'm looking for packages with approachable documentation, or a tool that might help build a model.
Example image
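One straightforward approach, sketched below in OpenCV/Python, is to keep a reference photo of the panel, re-align each new snapshot against it (to absorb slight camera movement), and then test the mean brightness of a fixed region around each LED. The file names, ROI coordinates, and brightness threshold here are illustrative placeholders you would calibrate for your own panel:

```python
import cv2
import numpy as np

# Hypothetical LED regions: (x, y, width, height) for each of the 6 LEDs,
# measured once on the reference photo.
LED_ROIS = [(x, 40, 12, 12) for x in (30, 60, 90, 120, 150, 180)]

reference = cv2.imread("panel_reference.jpg", cv2.IMREAD_GRAYSCALE)
snapshot = cv2.imread("panel_now.jpg", cv2.IMREAD_GRAYSCALE)

# Tolerate small camera drift: estimate the translation between the two
# photos with phase correlation and shift the snapshot back onto the
# reference frame before sampling the LED regions.
(dx, dy), _ = cv2.phaseCorrelate(np.float32(reference), np.float32(snapshot))
M = np.float32([[1, 0, -dx], [0, 1, -dy]])
aligned = cv2.warpAffine(snapshot, M, (reference.shape[1], reference.shape[0]))

# Call an LED "on" if its region's mean brightness exceeds a calibrated threshold.
THRESHOLD = 128
for i, (x, y, w, h) in enumerate(LED_ROIS):
    on = aligned[y:y + h, x:x + w].mean() > THRESHOLD
    print(f"LED {i + 1}: {'on' if on else 'off'}")
```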

Related

Positioning system for 1:8 scale RC cars with millimeter accuracy

I am looking for an indoor positioning/2D motion tracking system for small robot cars (1:8 scale RC cars). We want to use the system as ground truth for the development of autonomous driving applications, so we are looking for an accuracy of a few millimeters. The testing area is around 10x10 m.
The cars are running ROS (Robot Operating System), so an existing implementation would be nice.
A known solution is an OptiTrack motion capture system, but with a cost of >10k Euro this is way above our budget of around 1.5k Euro. I have also been looking into using HTC Vive trackers with OpenVR, but I am not sure whether that is a reliable solution.
Any idea would be very welcome!

BPM detection options on iOS

I have scoured the net for resources on BPM detection for iOS and tried to implement various techniques and link against various libraries, but I keep running into either build errors or BPM detection that simply doesn't work.
What are the viable options for basic BPM detection on iOS? It doesn't have to be highly accurate with onset positions, but rather just detect the BPM for a series of audio buffers.
I tried VAMP but cannot get it to run on iOS, and I've tried various C++ options but none of them work.
Are there any MIT-licensed BPM detection algorithms that integrate easily with iOS, or any commercial options that don't cost a fortune, since this is for a full audio library? I would like to detect BPM from a file, not through the microphone.
I would just like a BPM detector class as I don't have the time to learn and implement one myself at this point in time.
Any help will be greatly appreciated.
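Not an iOS-specific answer, but to illustrate what such a detector class does internally, here is a rough NumPy sketch of the standard approach: build an onset-energy envelope from the audio, autocorrelate it, and take the strongest lag in a plausible tempo range. The hop size and tempo bounds are illustrative assumptions, and a production detector would be considerably more careful:

```python
import numpy as np

def estimate_bpm(samples, sr=44100, hop=512, lo=60, hi=180):
    """Rough BPM estimate from a mono float array: onset-energy envelope
    followed by autocorrelation over the plausible tempo range."""
    # Frame-wise energy; positive differences approximate onsets.
    frames = samples[: len(samples) // hop * hop].reshape(-1, hop)
    energy = (frames ** 2).sum(axis=1)
    onset = np.maximum(np.diff(energy), 0.0)

    # Autocorrelate the onset envelope and pick the strongest lag whose
    # corresponding tempo lies between `lo` and `hi` BPM.
    ac = np.correlate(onset, onset, mode="full")[len(onset) - 1:]
    frame_rate = sr / hop
    lags = np.arange(1, len(ac))
    bpm = 60.0 * frame_rate / lags
    valid = (bpm >= lo) & (bpm <= hi)
    best = lags[valid][np.argmax(ac[1:][valid])]
    return 60.0 * frame_rate / best
```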

Using OpenCV with no image processing background to detect objects on a pavement in order to avoid them

I'm a Software Engineering student in the last year of a 4-year bachelor's degree program, and I'm required to work on a graduation project of my own choice.
We are trying to find a way to notify the user of anything that gets in his/her way while walking. This will be implemented as an Android application, so we have the ability to use the camera. We thought of image processing and computer vision, but neither I nor any of my group members has any image processing background. We searched a little and found out about OpenCV.
So my question is: do I need any special background to work with OpenCV? And is computer vision a good choice for the objective of my project? If not, what alternatives do you advise me to use?
I appreciate your help. Thanks in advance!
At first glance, I would use 2 standard cameras to compute a depth image via stereo vision (similar to the MS Kinect depth sensor);
from that it would be easy to set a threshold at some distance.
Those algorithms are very CPU-hungry, so I do not think it will work on Android (although I have zero experience with it).
If you must use Android, I would look for a depth sensor (to avoid extracting depth data from 2 images).
For prototyping I would use MATLAB (or Octave), then switch to OpenCV (pointers, memory allocations, etc.).
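To make the stereo-plus-threshold idea concrete, here is a minimal OpenCV/Python sketch. It assumes you already have a rectified left/right image pair; the file names and the disparity cutoff are placeholders to be tuned for your camera baseline:

```python
import cv2
import numpy as np

# Load rectified left/right frames (paths are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo: numDisparities must be a multiple of 16.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Larger disparity means a closer object; flag pixels nearer than a chosen cutoff.
NEAR_DISPARITY = 40.0  # tune for your baseline and focal length
obstacle_mask = disparity > NEAR_DISPARITY
print("obstacle pixels:", int(obstacle_mask.sum()))
```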

Image Processing - BeagleBone vs Raspberry Pi

I've been researching for a while and found tons of helpful resources on this subject, but I figured I would lay down my specifications here so I can get some recommendations from people experienced in this area. It seems like the BeagleBone and Raspberry Pi with a Logitech or Microsoft camera are my best options at this point.
My target speed is 50 fps (20 ms per image) with the processing involved. From what I've looked at, this doesn't seem feasible considering most webcams don't go much past 30 fps. More specifically, I need to take the endpoints of an object (like a sheet of paper) and calculate where the midpoint is. Nothing incredibly fancy. 1080p isn't a requirement; I can most likely go much lower. Python is preferable over C and C++ since I've already done a lot of image processing with Python.
It looks like a lot of the code I'll be needing is mostly open-source already, so I really just need to figure out what controller/camera combo I should be using.
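For what it's worth, the endpoint/midpoint step itself is only a few lines in OpenCV/Python. A minimal single-frame sketch, assuming OpenCV 4.x and that the sheet is the largest bright blob in view:

```python
import cv2
import numpy as np

# Hypothetical single-frame example: find the extreme points of the largest
# bright object (e.g., a sheet of paper) and compute its midpoint.
frame = cv2.imread("frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
sheet = max(contours, key=cv2.contourArea)

# Leftmost and rightmost contour points serve as the "endpoints".
left = tuple(sheet[sheet[:, :, 0].argmin()][0])
right = tuple(sheet[sheet[:, :, 0].argmax()][0])
midpoint = ((left[0] + right[0]) // 2, (left[1] + right[1]) // 2)
print("endpoints:", left, right, "midpoint:", midpoint)
```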
It's still a bit of a toss-up between the two; however, here are my views.
The BBB will use a USB webcam, and that will take a certain amount of processing power just to get the image. After that you can manipulate it with SimpleCV.
The RPi has a camera board that they say will only use < 3% of the CPU, and the rest can be used for processing your image. Plus you can overclock the RPi to 1 GHz.
Using the RPi with a basic webcam does not give a very good result, whereas the RPi camera works directly on the CSI bus and does 1080p natively. Plus they now have drivers for the camera that work with SimpleCV too.
IMHO the RPi B and camera board would be technically faster than the BBB, but it also depends on what manipulation you plan to do.
Marc

OpenCV + Webcam compatibility

For the people who have experience with OpenCV: are there any webcams that don't work with OpenCV?
I am looking into the feasibility of a project and I know I am going to need a high quality feed (1080p), so I am going to need a webcam that is capable of that. So does OpenCV have problems with certain cameras?
To analyse a video feed of that resolution on the fly I am going to need a fast processor, I know, but will I need a machine that is not consumer-available? I.e., will an i7 do?
Thanks.
On Linux, if it's supported by v4l2, it is probably going to work (e.g., my home webcam isn't listed, but it's v4l2 compatible and works out of the box). You can always use the camera manufacturer's driver to acquire frames, and feed them to your OpenCV code. You can even sub-class the VideoCapture class, and implement your camera driver to make it work seamlessly with OpenCV.
I would think the latest i7 series should work just fine. You may want to also check out Intel's IPP library for more optimized routines. IPP also easily integrates into OpenCV code since OpenCV was an Intel project at its inception.
If you need really fast image processing, you might want to consider adding a high performance GPU to the box, so that you have that option available to you.
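To illustrate the "use the manufacturer's driver and feed frames to your OpenCV code" route in Python, here is a small sketch; the SDK object and its grab_frame() call are hypothetical stand-ins for whatever the vendor actually provides:

```python
import cv2
import numpy as np

class VendorCamera:
    """Hypothetical wrapper that mimics cv2.VideoCapture's read() interface,
    so downstream OpenCV code doesn't care where frames come from."""

    def __init__(self, sdk_handle):
        self._sdk = sdk_handle  # placeholder for a manufacturer SDK object

    def read(self):
        # Assume the SDK hands back a raw BGR byte buffer plus its dimensions.
        raw, width, height = self._sdk.grab_frame()
        frame = np.frombuffer(raw, dtype=np.uint8).reshape(height, width, 3)
        return True, frame

# Downstream processing stays plain OpenCV, e.g.:
# cam = VendorCamera(sdk)
# ok, frame = cam.read()
# edges = cv2.Canny(frame, 100, 200)
```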
Unfortunately, the page that I'm about to reference doesn't exist anymore. OpenCV evolved a lot since I first wrote this answer in 2011 and it's difficult for them to keep track of which cameras in the market are supported by OpenCV.
Anyway, here is the old list of supported cameras organized by Operating System (this list was available until the beginning of 2013).
It depends on whether your camera is supported by OpenCV, mainly on the driver model that your camera uses.
Quote from Getting Started with OpenCV capturing:
Currently two camera interfaces can be used on Windows: Video for Windows (VFW) and Matrox Imaging Library (MIL) and two on Linux: Video for Linux(V4L) and IEEE1394. For the latter there exists two implemented interfaces (CvCaptureCAM_DC1394_CPP and CvCapture_DC1394V2).
So if your camera is VFW- or MIL-compliant under Windows, or fits the standard V4L or IEEE1394 driver model, then it will probably work.
But if not, like mevatron says, you can even sub-class the VideoCapture class, and implement your camera driver to make it work seamlessly with OpenCV.
