Speed tracking a moving object from another moving object - OpenCV

I am new to computer vision and need some advice on where to start.
The project is to estimate the speed of a moving object (A) relative to the moving object (B) that is tracking it (A).
What should I do if I assume:
the background appears static (making the background a single colour)
the background is moving (harder)
I want to do this using OpenCV and C++.
Any advice on where to start, or on the general steps, would be very appreciated. Thanks in advance!

If your camera is attached to object B, first you will have to design an algorithm to detect and track object A. A simplified algorithm can be:
Loop over the steps below:
Capture a video frame from the camera.
If object A was not in the previous frame, detect object A (manual initialisation, detection using known features, etc.). Otherwise, track the object using the previous position and a tracking algorithm (OpenCV offers quite a few).
Detect and record the current location of the object in image coordinates.
Convert the location to real-world coordinates.
If previous locations and timestamps for the object are available, calculate its speed.
The best way to do this is to get started with at least a simple C++ program that captures frames from a camera, and keep adding steps for detection and tracking, as in the sketch below.
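A minimal sketch of that loop, assuming OpenCV 4.5+ with the opencv_contrib tracking module available; TrackerKCF and the manual ROI initialisation are illustrative choices, and the speed printed here is in pixels per second (converting to real-world units still needs camera calibration and a depth estimate, as noted above):

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/tracking.hpp>
#include <cmath>
#include <iostream>

int main() {
    cv::VideoCapture cap(0);                       // camera mounted on object B
    if (!cap.isOpened()) return 1;

    cv::Mat frame;
    cap >> frame;

    // Manual initialisation: the user draws a box around object A.
    cv::Rect roi = cv::selectROI("init", frame);
    cv::Ptr<cv::Tracker> tracker = cv::TrackerKCF::create();
    tracker->init(frame, roi);

    cv::Point2f prevCentre;
    double prevTime = 0.0;
    bool havePrev = false;

    while (cap.read(frame)) {
        double t = cv::getTickCount() / cv::getTickFrequency();
        if (tracker->update(frame, roi)) {
            cv::Point2f centre(roi.x + roi.width / 2.0f, roi.y + roi.height / 2.0f);
            if (havePrev) {
                // Pixel speed only; real-world speed needs calibration and
                // depth for the image-to-world conversion step above.
                double pxPerSec = std::hypot(centre.x - prevCentre.x,
                                             centre.y - prevCentre.y) / (t - prevTime);
                std::cout << "speed: " << pxPerSec << " px/s\n";
            }
            prevCentre = centre;
            prevTime = t;
            havePrev = true;
            cv::rectangle(frame, roi, cv::Scalar(0, 255, 0), 2);
        }
        cv::imshow("tracking", frame);
        if (cv::waitKey(1) == 27) break;           // Esc to quit
    }
    return 0;
}
```

Swapping in another tracker (e.g. TrackerCSRT) only changes the create() line.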

Related

What are limitations for scanning and detecting 3d object in ARKit2.0 in iOS?

I have implemented 3D object scanning and detection with ARKit 2.0. I scanned the object from all sides. Once scanning was 100% done, I gave the object a name and saved the ARReferenceObject and an image to the documents directory. Then, on a button tap, I detect the scanned object and display its name and image from the documents directory.
The object does get detected, but it takes too much time. I have gone through Apple's documentation on best practices and limitations, but I still have some questions about ARKit.
Am I doing anything wrong while scanning or detecting the object? What are the best practices for scanning a 3D object?
What are the limitations for scanning and detecting objects?
Is it possible to zoom while detecting an object?
What are the best practices for detecting an object quickly, i.e. without the detection taking too much time?
ARKit engineers give the following recommendations for scanning 3D objects:
Light the object with an illuminance of 250 to 400 lux, and ensure that it’s well-lit from all sides.
Provide a light temperature of around ~6500 Kelvin (D65) – similar to daylight. Avoid warm or any other coloured light sources.
Set the object in front of a matte, middle-grey background.
For easy object scanning, use a recent, high-performance iOS device (iPhone X/Xs/Xr, iPad Pro). Scanned objects can be detected on any ARKit-supported device, but the process of creating a high-quality scan is faster and smoother on a high-performance device.
Position the object you want to scan on a surface free of other objects (like an empty tabletop).
Also, I should add four things:
Objects with non-repetitive (unlike polka dots) and non-flat textures are preferable. Scanning objects with a "not-rich" texture takes a little longer.
Try not to scan transparent objects like a glass statuette or a jar of water. For ARKit these kinds of objects are undesirable, no matter whether their index of refraction (IOR) is 1.0 or 3.0.
Try not to scan highly reflective objects like a mirror or a chrome sphere. These types of objects are undesirable for ARKit too, because their apparent "texture" depends on the angle of view.
Try not to scan objects with a chromatic dispersion effect, like the surface of a DVD or precious stones in jewelry.
Using zoom when scanning is a controversial issue.
The most robust scenario for me with ARObjectScanningConfiguration is to scan a middle-sized object from 0.5 to 1.5 meters away. In ARKit, autofocus is enabled by default.
All the aforementioned recommendations are general. Every object is unique, and each object needs a different amount of time to scan.
Hope this helps.

Can ARCore track moving surfaces?

According to its documentation, ARCore can track static surfaces, but the documentation doesn't mention anything about moving surfaces, so I'm wondering whether ARCore can track flat surfaces (with enough feature points, of course) that move around.
Yes, you definitely can track moving surfaces and moving objects in ARCore.
If you track a static surface using ARCore, the resulting feature points are mainly suitable for so-called camera tracking. If you track a moving object or surface, the resulting features are mostly suitable for object tracking.
You can also mask the moving and non-moving parts of the image and, of course, invert the six-degrees-of-freedom (translation along x, y, z and rotation about x, y, z) camera transform.
Watch this video to find out how they succeeded.
Yes, ARCore tracks feature points, estimates surfaces, and also allows access to the image data from the camera, so custom computer vision algorithms can be written as well.
I guess it should theoretically be possible.
However, I've tested it with some objects in my house (running an S8 and an app built with Unity and ARCore), and the problem is more or less that it refuses to even start tracking movable things like books and plates: due to the feature points of the surrounding floor etc., it always picks up on those first.
Edit: I did some more testing and managed to get it to track a bed sheet. It does, however, not adjust to any movement. As of now the plane stays fixed, although I saw some wobbling; I guess that was because it tried to adjust the positioning of the plane once its original feature points were moved.

OpenCV and C++ object detection in real time

Hi, I use OpenCV to detect objects, and that works without problems.
The problem is that when I move the camera, everything is detected, because I detect motion (not colour) in real time. How can I recognise whether the object is moving or the camera? I was thinking about this and came up with an idea:
First, add a point at the centre of the image (the image comes from a video).
Then, when I check for a moving object, if its distance to that point didn't change, it didn't move and the motion came from the camera. Is my idea good, and how do I add an object or a point to the image?
I assume you would like to tell whether the object is moving or the camera. When there is only one camera, the usual solutions are to use a reference (non-moving) object, or a mechanical sensor for camera movement. If you use two cameras, you can usually calibrate them and use stereo vision formulations to solve the problem. A related single-camera trick is sketched below.
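A hedged single-camera sketch of that idea (my own addition, not part of the answer above): estimate the dominant inter-frame motion with a RANSAC homography over KLT-tracked features; features consistent with it are explained by camera motion, while outliers are candidates for independently moving objects:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap(0);
    cv::Mat frame, gray, prevGray;
    cap >> frame;
    cv::cvtColor(frame, prevGray, cv::COLOR_BGR2GRAY);

    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        std::vector<cv::Point2f> p0, p1;
        cv::goodFeaturesToTrack(prevGray, p0, 400, 0.01, 8);
        if (p0.size() >= 4) {
            std::vector<uchar> st;
            std::vector<float> err;
            cv::calcOpticalFlowPyrLK(prevGray, gray, p0, p1, st, err);

            // Keep only the points LK tracked successfully.
            std::vector<cv::Point2f> a, b;
            for (size_t i = 0; i < p0.size(); ++i)
                if (st[i]) { a.push_back(p0[i]); b.push_back(p1[i]); }

            std::vector<uchar> inlier;
            if (a.size() >= 4 &&
                !cv::findHomography(a, b, cv::RANSAC, 3.0, inlier).empty()) {
                // Inliers follow the dominant motion (i.e. the camera);
                // outliers are candidates for independently moving objects.
                for (size_t i = 0; i < b.size(); ++i)
                    cv::circle(frame, b[i], 3,
                               inlier[i] ? cv::Scalar(0, 255, 0)   // camera motion
                                         : cv::Scalar(0, 0, 255),  // object motion
                               -1);
            }
        }
        cv::imshow("camera vs object motion", frame);
        prevGray = gray.clone();
        if (cv::waitKey(1) == 27) break;                           // Esc to quit
    }
    return 0;
}
```

This works best when the background is roughly planar or far away, so a single homography is a reasonable model of the camera-induced motion.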

Algorithms for Tracking moving objects with a moving camera

I'm trying to develop an algorithm for tracking moving objects in real time with a single moving camera, as a project in OpenCV (C++).
My basic objectives are
Detect motion in an (initially) static frame
Track that moving object (camera to follow that object)
Here is what I have tried already
Salient motion detection using temporal differencing and optical flow (does not compensate for a moving camera).
KLT-based feature tracking, but I was not able to segment the moving object's features (they got mixed with other trackable features in the image).
Mean-shift-based tracking (requires initialisation and is a bit computationally expensive).
I'm now trying to look into the following methods
Histograms of Oriented Gradients (HOG).
Algorithms that estimate camera motion parameters.
Any advice on which direction I should proceed in to achieve my objective would be appreciated.
Search for 'zdenek kalal predator' on google.com and watch the videos and read the papers that come up; I think it will give you a lot of insight. (Kalal's 'Predator' is the TLD, Tracking-Learning-Detection, tracker.)
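On the first limitation listed in the question (temporal differencing does not compensate for a moving camera), one common remedy is to warp the previous frame onto the current one with an estimated homography before differencing, so that mostly independent motion survives. A hedged sketch, assuming a roughly planar or distant background; the function name is my own:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Ego-motion-compensated frame differencing: warp prevGray onto gray using a
// RANSAC homography, then threshold the residual difference. The residual
// highlights motion that the camera's own movement cannot explain.
cv::Mat motionCompensatedDiff(const cv::Mat& prevGray, const cv::Mat& gray) {
    cv::Mat diff;
    std::vector<cv::Point2f> p0, p1;
    cv::goodFeaturesToTrack(prevGray, p0, 300, 0.01, 10);
    if (p0.size() >= 4) {
        std::vector<uchar> st;
        std::vector<float> err;
        cv::calcOpticalFlowPyrLK(prevGray, gray, p0, p1, st, err);

        std::vector<cv::Point2f> a, b;
        for (size_t i = 0; i < p0.size(); ++i)
            if (st[i]) { a.push_back(p0[i]); b.push_back(p1[i]); }

        if (a.size() >= 4) {
            cv::Mat H = cv::findHomography(a, b, cv::RANSAC);
            if (!H.empty()) {
                cv::Mat warped;
                cv::warpPerspective(prevGray, warped, H, gray.size());
                cv::absdiff(gray, warped, diff);   // residual = independent motion
                cv::threshold(diff, diff, 25, 255, cv::THRESH_BINARY);
                return diff;
            }
        }
    }
    cv::absdiff(gray, prevGray, diff);             // fallback: plain differencing
    cv::threshold(diff, diff, 25, 255, cv::THRESH_BINARY);
    return diff;
}
```

The threshold value 25 is illustrative; the resulting mask can then feed a blob detector or seed one of the trackers mentioned above.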

Object tracking in OpenCV

I had been using the LK algorithm to detect corners and interest points for tracking.
However, I am stuck at the point where I need something like a rectangular box to follow the tracked object. All I have now is a lot of points showing my moving objects.
Are there any methods or suggestions for that? Also, any idea on adding a counter to the window so that objects moving in and out of the screen can be counted as well?
Thank you
There are lots of options! Within OpenCV, I'd suggest using CamShift as a starting point, since it is relatively easy to use. CamShift uses mean shift to iteratively search for an object in consecutive frames.
Note that you need to seed the tracker with some kind of input. You could have the user draw a rectangle around the object, or use a detector to get the initial input. If you want to track faces, for example, OpenCV has a cascade classifier and training data for a face detector included.
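A hedged sketch of that workflow: seed CamShift with a user-drawn rectangle, build a hue histogram of the seeded region, and back-project it every frame. The histogram bin count and the saturation/value thresholds are illustrative, not tuned values:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);
    cv::Mat frame;
    cap >> frame;

    // Seed the tracker: the user draws a rectangle around the object.
    cv::Rect track = cv::selectROI("select object", frame);

    // Build a hue histogram of the selected region.
    cv::Mat hsv, mask, hist;
    cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(0, 60, 32), cv::Scalar(180, 255, 255), mask);
    int histSize = 16;
    float hueRange[] = {0, 180};
    const float* ranges[] = {hueRange};
    int channels[] = {0};
    cv::Mat roiHsv = hsv(track), roiMask = mask(track);
    cv::calcHist(&roiHsv, 1, channels, roiMask, hist, 1, &histSize, ranges);
    cv::normalize(hist, hist, 0, 255, cv::NORM_MINMAX);

    cv::TermCriteria crit(cv::TermCriteria::EPS | cv::TermCriteria::COUNT, 10, 1.0);
    while (cap.read(frame)) {
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
        cv::Mat backProj;
        cv::calcBackProject(&hsv, 1, channels, hist, backProj, ranges);

        // CamShift adapts the window's size and orientation each frame.
        cv::RotatedRect box = cv::CamShift(backProj, track, crit);
        cv::ellipse(frame, box, cv::Scalar(0, 255, 0), 2);

        cv::imshow("CamShift", frame);
        if (cv::waitKey(30) == 27) break;    // Esc to quit
    }
    return 0;
}
```

For the counting question, one simple approach is to increment a counter whenever the tracked box crosses a virtual line near the screen edge.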
