Object Detection with moving camera - opencv

I understand that with a moving object and a stationary camera, it is easy to detect the object by subtracting the previous and current camera frames. It is also possible to detect moving objects when the camera moves freely around the scene.
But is it possible to detect a stationary object with a camera rotating around it? The camera's movement is predefined, and it is restricted to a specified path around the object.

Try the CamShift demo that ships with the OpenCV source code at samples/cpp/camshiftdemo.cpp, or other algorithms such as MeanShift, KCF, etc. These are all object tracking algorithms.
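For reference, the stationary-camera baseline mentioned in the question (subtracting consecutive frames) can be sketched without any OpenCV dependency; a real pipeline would use cv::absdiff and cv::threshold on cv::Mat frames instead, and countChangedPixels is a hypothetical helper name:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <vector>

// Frame differencing on raw grayscale buffers (the same idea cv::absdiff +
// cv::threshold implement): a pixel counts as "moving" when its intensity
// changes by more than `thresh` between consecutive frames.
// Returns the number of changed pixels.
std::size_t countChangedPixels(const std::vector<uint8_t>& prev,
                               const std::vector<uint8_t>& curr,
                               int thresh)
{
    std::size_t changed = 0;
    for (std::size_t i = 0; i < prev.size() && i < curr.size(); ++i) {
        if (std::abs(static_cast<int>(curr[i]) - static_cast<int>(prev[i])) > thresh)
            ++changed;
    }
    return changed;
}
```

With a camera rotating around a stationary object, this simple difference fires everywhere, which is exactly why the tracking algorithms above are the better fit.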

Related

3D objects keep moving ARKit

I am working on an AR app in which I place a 3D model in front of the device without horizontal surface detection.
Based on this 3D model's transform, I create an ARAnchor object. ARAnchor objects are useful for tracking real-world objects and 3D objects in ARKit.
Code to place ARAnchor:
ARAnchor *anchor = [[ARAnchor alloc] initWithTransform:model3D.simdTransform]; // simd transform of the 3D model
[self.sceneView.session addAnchor:anchor];
Issue:
Sometimes, I find that the 3D model starts moving in a random direction without stopping.
Questions:
Is my code to create the ARAnchor correct? If not, what is the correct way to create an anchor?
Are there any known problems with ARKit where objects start moving? If yes, is there a way to fix them?
I would appreciate any suggestions and thoughts on this topic.
EDIT:
I place the 3D object when the AR tracking state is normal. The 3D object is placed (without horizontal surface detection) when the user taps the screen. As soon as the 3D model is placed, it starts moving without stopping, even if the device is not moving.
In fact, you don't need an ARAnchor; just set the position of the 3D object in front of the user.
If the surface does not provide enough detail to determine a position, the object won't attach to it. Find a plane with more texture and try again.

Speed Tracking a moving object from another moving object

I am new to computer vision, and need some advice on where to start.
The project is to estimate the speed of a moving object (A) relative to the moving object (B) that is tracking it (A).
What would I need to do under each assumption:
if the background appears static (making the background a single color)
if the background is moving (harder)
I want to do this using OpenCV and C++.
Any advice on where to start and on the general steps would be much appreciated. Thanks in advance!
If your camera is attached to object B, first you will have to design an algorithm to detect and track object A. A simplified algorithm can be:
Loop the steps below:
Capture video frame from the camera.
If object A was not in the previous frame, detect object A (manual initialisation, detection using known features, etc.). Otherwise, track the object using its previous position and a tracking algorithm (OpenCV offers quite a few).
Detect and record the current location of the object in image coordinates.
Convert the location to real world coordinates.
If previous locations and timestamps for the object are available, calculate its speed.
The best way to do this is to start with a simple C++ program that captures frames from a camera, and keep adding steps for detection and tracking.
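The last two steps of the loop above (real-world coordinates plus timestamps → speed) might look like this in plain C++; TimedPoint and estimateSpeed are illustrative names, not an OpenCV API:

```cpp
#include <cmath>
#include <vector>

// One observation of the tracked object: 2D world position plus a
// timestamp in seconds.
struct TimedPoint {
    double x, y, t;
};

// Speed of the object between its two most recent observations, in world
// units per second. Returns 0 if fewer than two samples exist or the
// timestamps coincide.
double estimateSpeed(const std::vector<TimedPoint>& track)
{
    if (track.size() < 2) return 0.0;
    const TimedPoint& a = track[track.size() - 2];
    const TimedPoint& b = track.back();
    double dt = b.t - a.t;
    if (dt <= 0.0) return 0.0;
    return std::hypot(b.x - a.x, b.y - a.y) / dt;
}
```

Averaging over more than two samples would smooth out tracking jitter, at the cost of lag.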

OpenCV and C++ object detection in real time

Hi, I use OpenCV to detect objects, and that works without problems.
The problem is that when I move the camera, everything is detected as moving, because I detect motion (not color) in real time. How can I recognize whether the object is moving or the camera? I have been thinking about this and came up with an idea:
First, add a reference point at the center of the image (the image comes from video).
Then, when I check for a moving object, if its distance to the reference point didn't change, it didn't move and the motion came from the camera. Is my idea good, and how do I add such an object or point to the image?
I assume you would like to tell whether the object is moving or the camera. With only one camera, solutions usually rely on a reference (non-moving) object or on a mechanical sensor that measures camera movement. If you use two cameras, you can usually calibrate them and use stereo vision formulations to solve the problem.
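The reference-point idea from the question can be sketched as a simple decision rule, assuming you already track one known static background point and the object point between frames (all names here are hypothetical, not an OpenCV API):

```cpp
#include <cmath>

struct Pt { double x, y; };

// Euclidean distance between two image points.
double dist(Pt a, Pt b) { return std::hypot(a.x - b.x, a.y - b.y); }

// If a known static background point moved between frames, the camera
// moved, so any object displacement is suspect; otherwise a displaced
// object point is real object motion. `tol` absorbs tracking noise.
bool objectMoved(Pt refPrev, Pt refCurr, Pt objPrev, Pt objCurr, double tol)
{
    bool cameraMoved  = dist(refPrev, refCurr) > tol;
    bool objDisplaced = dist(objPrev, objCurr) > tol;
    return objDisplaced && !cameraMoved;
}
```

Note the caveat from the answer above: this only works if the reference point really is static in the world, which a single camera cannot verify on its own.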

selecting 3D world points to process a camera calibration

I have 2 images of the same object from different views. I want to perform a camera calibration, but from what I have read so far I need 3D world points to get the camera matrix.
I am stuck at this step; can someone explain it to me?
Popular camera calibration methods use 2D-3D point correspondences to determine the projective properties (intrinsic parameters) and the pose of a camera (extrinsic parameters). The most simple approach is the Direct Linear Transformation (DLT).
You might have seen that planar chessboards are often used for camera calibration. The 3D coordinates of their corners can be chosen by the user. Many people choose the chessboard to lie in the x-y plane, i.e. [x, y, 0]'. However, the 3D coordinates need to be consistent.
Coming back to your object: span your own 3D coordinate system over the object and find at least six spots whose 3D positions you can determine easily. Once you have that, you have to find their corresponding 2D positions (pixels) in your two images.
There are complete examples in OpenCV. Maybe you get a better picture when reading the code.
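As an example of choosing the 3D coordinates yourself, here is how the object points for a planar chessboard in the z = 0 plane are typically generated (the same kind of grid you would pass as objectPoints to cv::calibrateCamera; the helper name is illustrative):

```cpp
#include <vector>

struct Point3 { float x, y, z; };

// 3D coordinates of the interior chessboard corners in the board's own
// frame, with the board lying in the z = 0 plane. squareSize is in any
// consistent unit (e.g. millimetres); the calibration result inherits it.
std::vector<Point3> chessboardCorners(int rows, int cols, float squareSize)
{
    std::vector<Point3> pts;
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c)
            pts.push_back({c * squareSize, r * squareSize, 0.0f});
    return pts;
}
```

For an arbitrary object, you would instead measure six or more spots by hand and list their coordinates explicitly, as described above.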

two images with camera position and angle to 3d data?

Suppose I've got two images taken by the same camera. I know the 3d position of the camera and the 3d angle of the camera when each picture was taken. I want to extract some 3d data from the images on the portion of them that overlaps. It seems that OpenCV could help me solve this problem, but I can't seem to find where my camera position and angle would be used in their method stack. Help? Is there some other C library that would be more helpful? I don't even know what keywords to search for on the web. What's the technical term for overlapping image content?
You need to learn a little more about camera geometry and stereo rig geometry. Unless your camera was mounted on a special rig, it is rather doubtful that its pose for each image can be specified with just one angle and a point. Rather, you need three angles (e.g. roll, pitch, yaw) plus the 3D position. Also, if you want your reconstruction to be metrically accurate, you need to accurately calibrate the focal length of the camera (at a minimum). As for keywords: search for "stereo vision", "epipolar geometry", and "structure from motion".
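To make the three-angle point concrete, here is a minimal sketch that composes a camera rotation matrix from yaw, pitch, and roll (Z-Y-X convention, one of several possible conventions; in OpenCV the result would become the R of the extrinsic parameters):

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Plain 3x3 matrix product.
Mat3 mul(const Mat3& A, const Mat3& B)
{
    Mat3 C{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}

// Rotation from three angles: yaw about z, then pitch about y, then roll
// about x. A single angle cannot describe a general camera orientation;
// all three are needed, plus the 3D position, to pin down the pose.
Mat3 rotationFromYpr(double yaw, double pitch, double roll)
{
    double cy = std::cos(yaw),   sy = std::sin(yaw);
    double cp = std::cos(pitch), sp = std::sin(pitch);
    double cr = std::cos(roll),  sr = std::sin(roll);
    Mat3 Rz{{{cy, -sy, 0}, {sy, cy, 0}, {0, 0, 1}}};
    Mat3 Ry{{{cp, 0, sp}, {0, 1, 0}, {-sp, 0, cp}}};
    Mat3 Rx{{{1, 0, 0}, {0, cr, -sr}, {0, sr, cr}}};
    return mul(mul(Rz, Ry), Rx);
}
```

With R and the camera position known for both views, triangulating 3D points from the overlapping region is a standard two-view (stereo) reconstruction problem.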
