Video image analysis - Detect fast movement / ignore slow movement - iOS

I am looking to capture video on an iPhone, initiating capture once fast motion is identified and stopping when slow motion or no motion is detected.
Here is a use case to illustrate:
If someone is holding the iPhone camera and there is no background movement, but their hands are not steady and are slowly drifting left/right/up/down, this movement should be considered slow.
If someone runs into the camera field of view quickly, this would be considered fast movement for recording.
If someone slowly walks into the camera field of view, this would be considered slow and shouldn't be picked up.
I was considering OpenCV but thought it may be overkill to use its motion detection and optical flow algorithms. I am thinking of a lightweight method that accesses the image pixels directly, perhaps examining changes in luminosity/brightness levels.
I only need to process 30-40% of the video frame area for motion (e.g. the top half of the screen), and can perhaps sample every other pixel. The reason for a lightweight algorithm is that it needs to be very fast (< 4 ms), since it will be processing incoming video buffer frames at a high frame rate.
Appreciate any thoughts into alternative image processing / fast motion detection routines by examining image pixels directly.
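To make the idea concrete, here is a rough sketch of the frame-differencing approach described above (assumptions on my part: 8-bit grayscale luma frames wrapped in cv::Mat, the top half of the frame, every other pixel in each direction; the helper name and sampling stride are only illustrative, not tested code):

    #include <opencv2/core.hpp>
    #include <cstdlib>

    // Mean absolute luminance change over a sparse sample of the top half of the frame.
    // Assumes prevGray/currGray are 8-bit single-channel (e.g. the camera's luma plane).
    double sampledFrameDiff(const cv::Mat& prevGray, const cv::Mat& currGray) {
        const int rows = prevGray.rows / 2;                  // top half only
        long sum = 0, count = 0;
        for (int y = 0; y < rows; y += 2) {                  // every other row
            const uchar* p = prevGray.ptr<uchar>(y);
            const uchar* c = currGray.ptr<uchar>(y);
            for (int x = 0; x < prevGray.cols; x += 2) {     // every other column
                sum += std::abs(static_cast<int>(c[x]) - static_cast<int>(p[x]));
                ++count;
            }
        }
        return count ? static_cast<double>(sum) / count : 0.0;
    }
    // "Fast motion" could then be a large mean difference sustained over a few consecutive
    // frames, while slow hand drift should produce only a small mean difference.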

Two things worth trying in OpenCV:
1. Dense optical flow, e.g. calcOpticalFlowFarneback (see the sketch after this list).
2. Motion history templates:
2.1 updateMotionHistory(silh, mhi, timestamp, MHI_DURATION);
2.2 calcMotionGradient(mhi, mask, orient, MAX_TIME_DELTA, MIN_TIME_DELTA...
2.3 segmentMotion(mhi, segmask, regions, timestamp, MAX_TIME_DELTA);
2.4 calcGlobalOrientation(orient_roi, mask_roi, mhi_roi, ...
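For option 1, a minimal sketch of how it could be wired up (the ROI, downscale factor and threshold are assumptions to be tuned; this only classifies one frame pair by mean flow magnitude):

    #include <opencv2/imgproc.hpp>
    #include <opencv2/video/tracking.hpp>

    // Returns true when the mean dense-flow magnitude over the region of interest
    // exceeds a (tunable) "fast motion" threshold.
    bool isFastMotion(const cv::Mat& prevGray, const cv::Mat& currGray) {
        cv::Rect roi(0, 0, prevGray.cols, prevGray.rows / 2);        // e.g. top half of the frame
        cv::Mat prevSmall, currSmall;
        cv::resize(prevGray(roi), prevSmall, cv::Size(), 0.5, 0.5);  // shrink to stay within the time budget
        cv::resize(currGray(roi), currSmall, cv::Size(), 0.5, 0.5);

        cv::Mat flow;
        cv::calcOpticalFlowFarneback(prevSmall, currSmall, flow,
                                     0.5, 2, 15, 2, 5, 1.1, 0);      // pyr_scale, levels, winsize, iterations,
                                                                     // poly_n, poly_sigma, flags
        cv::Mat xy[2], magnitude;
        cv::split(flow, xy);
        cv::magnitude(xy[0], xy[1], magnitude);
        return cv::mean(magnitude)[0] > 4.0;                         // pixels/frame; threshold is an assumption
    }

Note that dense flow may still be heavy for a < 4 ms budget on a phone, which is why the simpler frame-differencing sketch above may be the better starting point.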

Reduce motion blur for fast moving ball

I am trying to create a simple ball tracking system. It does not have to be perfect as it is not for commercial/production.
On the hardware side I am using a Raspberry Pi with the RPi Camera V2; on the software side, OpenCV.
In natural sunlight (even with some clouds) the ball is completely visible. But when the sun is gone and there is only artificial light, there is significant motion blur on the ball.
In the picture below, the top object is the ball under artificial light and the bottom one under natural light.
This is rather obvious: less light means longer exposure, and combined with the rolling shutter we get motion blur.
I tried all the settings on this camera (such as the sports/night exposure modes) but I think it is just a hardware limitation. I would like to reduce the motion blur. I would probably need a different camera that handles this better, but I have very little knowledge about camera sensors and their parameters, and I cannot afford to buy many cameras, compare them and then select the best one. So my question is: which camera model (compatible with the RPi) should I pick, or which parameters should I look for, to get less motion blur?
Thanks in advance!
EDIT: e.g. would a global shutter reduce the problem? (a camera like the ArduCam OV2311 2 Mpx Global Shutter)
EDIT2: Maybe changing the shutter speed would help, but I also need good fps (20-30); do the two conflict?
EDIT3: I have also read that using a night (NoIR) camera might help, since it is more light-sensitive.
Regards
In order to reduce the motion blur, you have to use a faster shutter speed, i.e. reduce the exposure time, sometimes combined with extra illumination.
For the Raspberry Pi, you have to disable auto-exposure and set the shutter speed value manually.
Hints:
A global-shutter camera doesn't help with motion blur; it only removes the rolling-shutter artifacts. You still need a very fast shutter speed to avoid motion blur.
Fps doesn't have much to do with the shutter speed; it is mainly limited by the read-out speed of the sensor.
NoIR might not help either, because it still needs strong illumination to allow a faster shutter speed.
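As an illustration only (property support is driver-dependent and the numbers below are assumptions), manual exposure can be requested through OpenCV's V4L2 backend; on the Pi camera it may be easier to set a fixed shutter time through the raspivid/libcamera options instead:

    #include <opencv2/videoio.hpp>
    #include <opencv2/highgui.hpp>

    int main() {
        cv::VideoCapture cap(0, cv::CAP_V4L2);
        if (!cap.isOpened()) return 1;

        cap.set(cv::CAP_PROP_FPS, 30);              // keep the frame rate you need
        cap.set(cv::CAP_PROP_AUTO_EXPOSURE, 0.25);  // 0.25 selects manual exposure on many V4L2 drivers
        cap.set(cv::CAP_PROP_EXPOSURE, 5);          // small value = short exposure = less motion blur
                                                    // (units and scale are driver-specific; tune empirically)
        cv::Mat frame;
        while (cap.read(frame)) {
            cv::imshow("preview", frame);
            if (cv::waitKey(1) == 27) break;        // Esc to quit
        }
        return 0;
    }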

iOS Panorama UI

I am trying to create a Panorama app for iPhone/iPad.
The image stitching bit is OK; I'm using the OpenCV libraries and the results are pretty acceptable.
But I'm a bit stuck on developing the UI for assisting the user while capturing the panorama.
Most apps (even on Android) provide the user with some sort of marker that translates/rotates to exactly match the movement of the user's camera.
[I'm using the iOS 7 default camera's panorama feature as a preliminary benchmark.]
However, I'm still way off the mark.
What I've tried:
I've tried using the accelerometer and gyro data for tracking the marker. With this approach:
I've applied an LPF to the accelerometer data and used simple Newtonian mechanics (with a carefully tuned damping factor) to translate the marker on the screen. Problem with this approach: very erratic data. The marker tends to jump and wobble between points, and it is hard to distinguish smooth movement from a jerk.
I've tried using a complementary filter between the LPF-ed gyro and accelerometer data to translate the blob. Problem with this approach: slightly better than the first approach, but still quite random.
I've also tried using image processing to compute optical flow. I'm using OpenCV's
    goodFeaturesToTrack(firstMat, cornersA, 30, 0.01, 30);
to get the trackable points from a first image (sampled from the camera picker) and then using calcOpticalFlowPyrLK to get the positions of these points in the next image. Problem with this approach: the motion vectors obtained from tracking these points are too noisy to compute the resultant direction of motion accurately.
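A simplified sketch of this optical-flow attempt, with a median over the per-point displacements added as one possible way to tame the noise (the helper name and parameter values are illustrative, not my exact code):

    #include <opencv2/imgproc.hpp>
    #include <opencv2/video/tracking.hpp>
    #include <algorithm>
    #include <vector>

    // Robust (median) 2D displacement between two grayscale frames.
    cv::Point2f medianFlow(const cv::Mat& prevGray, const cv::Mat& currGray) {
        std::vector<cv::Point2f> prevPts, currPts;
        cv::goodFeaturesToTrack(prevGray, prevPts, 30, 0.01, 30);
        if (prevPts.empty()) return cv::Point2f(0, 0);

        std::vector<uchar> status;
        std::vector<float> err;
        cv::calcOpticalFlowPyrLK(prevGray, currGray, prevPts, currPts, status, err);

        std::vector<float> dxs, dys;
        for (size_t i = 0; i < prevPts.size(); ++i) {
            if (!status[i]) continue;                          // drop points that were lost
            dxs.push_back(currPts[i].x - prevPts[i].x);
            dys.push_back(currPts[i].y - prevPts[i].y);
        }
        if (dxs.empty()) return cv::Point2f(0, 0);

        auto median = [](std::vector<float>& v) {
            std::nth_element(v.begin(), v.begin() + v.size() / 2, v.end());
            return v[v.size() / 2];
        };
        return cv::Point2f(median(dxs), median(dys));          // median rejects outlier tracks
    }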
What I think I should do next:
Perhaps compute the DCT matrix from the accelerometer and gyro data and use some algorithm to filter one output with the other.
Work on the image processing algorithms, using some different techniques (???).
Use a Kalman filter to fuse the state prediction from the accelerometer+gyro with that of the image processing block (a minimal sketch follows this list).
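For the Kalman filter idea, a minimal sketch of what I have in mind (assumptions: a constant-velocity model, hand-picked noise covariances, and the accelerometer/gyro prediction step omitted; the measurement would be the marker position estimated from optical flow):

    #include <opencv2/video/tracking.hpp>

    int main() {
        cv::KalmanFilter kf(4, 2);                      // state [x, y, vx, vy], measurement [x, y]
        kf.transitionMatrix = (cv::Mat_<float>(4, 4) <<
            1, 0, 1, 0,
            0, 1, 0, 1,
            0, 0, 1, 0,
            0, 0, 0, 1);
        cv::setIdentity(kf.measurementMatrix);
        cv::setIdentity(kf.processNoiseCov, cv::Scalar::all(1e-3));
        cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-1));  // larger = trust measurements less
        cv::setIdentity(kf.errorCovPost, cv::Scalar::all(1));

        // Per frame: predict, then correct with the measured marker position.
        cv::Mat prediction = kf.predict();
        cv::Mat measurement = (cv::Mat_<float>(2, 1) << 160.f, 240.f);   // e.g. from the optical-flow block
        cv::Mat smoothed = kf.correct(measurement);                      // draw the marker at smoothed rows 0 and 1
        return 0;
    }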
The help that I need:
Can you suggest some easier way to get this job done?
If not, can you highlight any possible mistake in my approach? Does it really have to be this complicated?
Please help.

iOS Camera Color Recognition in Real Time: Tracking a Ball

I have been looking around for a bit and know that people are able to track faces with Core Image and OpenGL. However, I am not sure where to start the process of tracking a colored ball with the iOS camera.
Once I have a lead on tracking the ball, I hope to create something that detects when the ball changes direction.
Sorry I don't have source code, but I am unsure where to even start.
The key point is image preprocessing and filtering. You can use the camera APIs to get the video stream from the camera and take a snapshot picture from it. Then process it with the following chain:
1. A Gaussian blur (spatial enhancement).
2. A luminance average threshold filter (to make a black-and-white image).
3. Morphological preprocessing (opening and closing operators) to hide the small noise.
4. An edge detection algorithm (for example a Prewitt operator). After these steps only the edges remain, and your ball should be a circle (when the recording environment is ideal).
5. A Hough transform to find the center of the ball.
You should record the ball position, and in the next frame only a small part of the picture (around the ball) needs to be processed.
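For reference, a rough OpenCV sketch of this kind of pipeline (the answer recommends GPUImage on iOS; the blur size, thresholds and radius range here are assumptions to tune, and HoughCircles is used in place of a separate edge-detection + Hough step):

    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Returns detected circles as (x, y, radius) for one BGR frame.
    std::vector<cv::Vec3f> findBall(const cv::Mat& bgrFrame) {
        cv::Mat gray, blurred, mask;
        cv::cvtColor(bgrFrame, gray, cv::COLOR_BGR2GRAY);
        cv::GaussianBlur(gray, blurred, cv::Size(9, 9), 2);                        // spatial enhancement
        cv::threshold(blurred, mask, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU); // luminance threshold
        cv::morphologyEx(mask, mask, cv::MORPH_OPEN, cv::Mat());                   // remove small noise
        cv::morphologyEx(mask, mask, cv::MORPH_CLOSE, cv::Mat());                  // fill small holes

        std::vector<cv::Vec3f> circles;
        cv::HoughCircles(mask, circles, cv::HOUGH_GRADIENT, 1,
                         mask.rows / 4,   // minimum distance between circle centres
                         100, 20,         // Canny high threshold, accumulator threshold
                         5, 100);         // min/max radius in pixels
        return circles;
    }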
Another keyword to search for: blob detection.
A fast library for image processing (on the GPU with OpenGL) is Brad Larson's GPUImage library: https://github.com/BradLarson/GPUImage
It implements all the needed filters (except the Hough transformation).
The tracking process can be defined as follows:
1. Have the initial coordinates and dimensions of an object with given visual characteristics (image features).
2. In the next video frame, find the same visual characteristics near the coordinates from the last frame.
"Near" means considering basic transformations relative to the last frame:
translation in each direction;
scale;
rotation.
The variation of these transformations is strictly related to the frame rate: the higher the frame rate, the nearer the position will be in the next frame.
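A small illustrative sketch of the "find the same features near the last coordinate" step, here done with template matching restricted to a search window (the helper and its parameters are assumptions, not Marvin code):

    #include <opencv2/imgproc.hpp>

    // Finds the best match for templGray in frameGray, searching only a window
    // around the object's top-left position from the previous frame.
    cv::Point trackNearLast(const cv::Mat& frameGray, const cv::Mat& templGray,
                            cv::Point lastTopLeft, int searchRadius) {
        cv::Rect window(lastTopLeft.x - searchRadius, lastTopLeft.y - searchRadius,
                        templGray.cols + 2 * searchRadius, templGray.rows + 2 * searchRadius);
        window &= cv::Rect(0, 0, frameGray.cols, frameGray.rows);   // clamp to the frame
        // (the caller should make sure the clamped window is still larger than the template)

        cv::Mat result;
        cv::matchTemplate(frameGray(window), templGray, result, cv::TM_CCOEFF_NORMED);

        double maxVal;
        cv::Point maxLoc;
        cv::minMaxLoc(result, nullptr, &maxVal, nullptr, &maxLoc);
        return window.tl() + maxLoc;                                // best match in full-frame coordinates
    }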
Marvin Framework provides plug-ins and examples to perform this task. It's not compatible with iOS yet. However, it is open source and I think you could port the source code fairly easily.
This video demonstrates some tracking features, starting at 1:10.

How to simulate a shaky cam with opencv?

I'm trying to simulate a shaky cam in a static video. I could choose a couple of points randomly and then pan/zoom/warp using easing, but I was wondering if there's a better, more standard way.
A shaky camera will usually not include zooming. The image rotation component would also be very small, and can probably be ignored. You can probably get sufficient results with 2D translation only.
What you should probably do is define your shake path over time (the amount of image motion relative to the original static video for each frame) and then shift each frame by this amount.
You might want to crop your video a bit to hide any blank parts near the image border; remaining blank regions may be filled using in-painting. The path should be relatively smooth and not completely random jitter, since you are simulating physical hand motion.
To make the effect more convincing, you should also add motion blur. The direction of this blur is the same as the shake path, and the amount is based on the current shake speed.
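A rough sketch of this suggestion (assumptions: translation-only shake from a smoothed random walk, a simple axis-aligned blur kernel whose length follows the shake speed, and hypothetical input/output file names):

    #include <opencv2/imgproc.hpp>
    #include <opencv2/videoio.hpp>
    #include <algorithm>
    #include <cmath>
    #include <random>

    int main() {
        cv::VideoCapture in("static.mp4");                       // hypothetical input file
        cv::VideoWriter out;
        std::mt19937 rng(42);
        std::normal_distribution<float> jitter(0.f, 1.5f);

        float dx = 0.f, dy = 0.f;                                // current shake offset
        cv::Mat frame;
        while (in.read(frame)) {
            // Low-pass the random walk so the path is smooth like a hand, not pure jitter.
            dx = 0.9f * dx + jitter(rng);
            dy = 0.9f * dy + jitter(rng);

            // Shift the frame by the current shake offset.
            cv::Mat M = (cv::Mat_<double>(2, 3) << 1, 0, dx, 0, 1, dy);
            cv::Mat shaken;
            cv::warpAffine(frame, shaken, M, frame.size(), cv::INTER_LINEAR, cv::BORDER_REFLECT);

            // Motion blur: kernel length grows with the shake speed, oriented along the dominant axis
            // (a line kernel along the exact shake direction would be more faithful).
            int len = std::max(1, static_cast<int>(std::round(std::hypot(dx, dy))));
            if (len > 1) {
                cv::Mat kernel;
                if (std::abs(dx) >= std::abs(dy))
                    kernel = cv::Mat::ones(1, len, CV_32F) / len;  // horizontal streak
                else
                    kernel = cv::Mat::ones(len, 1, CV_32F) / len;  // vertical streak
                cv::filter2D(shaken, shaken, -1, kernel);
            }

            if (!out.isOpened())
                out.open("shaky.mp4", cv::VideoWriter::fourcc('m','p','4','v'), 30, frame.size());
            out.write(shaken);
        }
        return 0;
    }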

Rapid motion and object detection in opencv

How can we detect rapid motion and objects simultaneously? Let me give an example.
Suppose there is a soccer match video, and I want to detect the position of each and every player with maximum accuracy. I was thinking about human detection, but in a soccer match video we can simply treat the humans as objects. Maybe we can do this with blob detection, but there are several problems with blobs, such as:
1) I want to separate each and every player, so if players collide, blob detection will not help to identify them separately.
2) There will also be the problem of the stadium lights.
So is there any particular algorithm, method or library to do this?
I've seen some research papers but wasn't satisfied, so please suggest anything related: articles, algorithms, libraries, methods, research papers, etc. Please share your views on this.
For fast and reliable human detection, Dalal and Triggs' Histogram of Oriented Gradients (HOG) detector is generally accepted as very good. Have you tried playing with that?
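For reference, a minimal sketch of OpenCV's built-in HOG pedestrian detector, which implements the Dalal-Triggs approach (the stride, padding and scale values are assumptions to tune):

    #include <opencv2/objdetect.hpp>
    #include <vector>

    // Returns bounding boxes of detected people in one frame.
    std::vector<cv::Rect> detectPlayers(const cv::Mat& frame) {
        static cv::HOGDescriptor hog;
        static bool initialised = false;
        if (!initialised) {
            hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());
            initialised = true;
        }
        std::vector<cv::Rect> found;
        // hitThreshold = 0, winStride = 8x8, padding = 16x16, scale = 1.05
        hog.detectMultiScale(frame, found, 0, cv::Size(8, 8), cv::Size(16, 16), 1.05);
        return found;
    }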
Since you mentioned rapid motion changes, are you worried about fast camera motion or fast player/ball motion?
You can do 2D or 3D video stabilization to fix camera motion (try the excellent Deshaker plugin for VirtualDub).
For fast player motion, background subtraction or other blob detection will definitely help. You can use it to get a rough kinematic estimate and turn that into an estimate of your blur kernel, which can then be used to deblur the image chip containing the player.
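One way to get those moving blobs, sketched with OpenCV's MOG2 background subtractor (an illustration, not the only option; the threshold and morphology step are assumptions):

    #include <opencv2/imgproc.hpp>
    #include <opencv2/video/background_segm.hpp>
    #include <vector>

    // Foreground mask -> cleanup -> one contour per moving blob.
    std::vector<std::vector<cv::Point>> movingBlobs(cv::Ptr<cv::BackgroundSubtractorMOG2>& bg,
                                                    const cv::Mat& frame) {
        cv::Mat fgMask;
        bg->apply(frame, fgMask);                                    // learn background, get foreground
        cv::threshold(fgMask, fgMask, 200, 255, cv::THRESH_BINARY);  // drop shadow pixels (marked as 127)
        cv::morphologyEx(fgMask, fgMask, cv::MORPH_OPEN, cv::Mat()); // remove speckle noise

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(fgMask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        return contours;
    }
    // Usage: auto bg = cv::createBackgroundSubtractorMOG2(500, 16, true); // history, varThreshold, detectShadows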
You can do additional processing to establish identity based on OCRing jersey numbers, etc.
You mentioned concern about the stadium lights. Is the main issue that they will cast shadows? That can be dealt with by the HOG detector, and blob detection to get the blur kernel should still work fine with shadows.
If you have control over the camera, you may want to reduce exposure times to reduce blur. Denoising techniques can reduce the sensor noise that appears at such low light, and dense optical flow approaches can align the frames and boost the signal back up to something reasonable by adding the aligned, denoised frames.
