Smooth camera movement in three.js - WebGL

I'm trying to modify the equirectangular panorama player ( https://github.com/mrdoob/three.js/blob/master/examples/webgl_panorama_equirectangular.html ) to add some smooth movement to the camera. Is there any way to make it move like in the cube example ( https://github.com/mrdoob/three.js/blob/master/examples/canvas_geometry_cube.html ), without breaking the mouse-drag control, but with a smooth fade-out of the motion?

JavaScript's raw mouse input events arrive too infrequently for smooth updating; they are typically generated at a much lower rate than your frame rate.
The solution is to interpolate between the mouse input events. First-person shooters call this mouse smoothing: filter the incoming events, interpolate them with a spline, add momentum (i.e. iPhone-style scrolling), and so on.
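As a rough, language-agnostic sketch of the damping idea (shown in Python here, since it is just arithmetic): the mouse handler only moves *target* angles, and every frame the camera angles ease toward those targets, so the motion keeps fading out after the pointer stops. In the three.js panorama example the same two damping lines would go into its per-frame update, applied to the angles it derives the view direction from (its `lon`/`lat` variables).

```python
# Minimal sketch of exponential mouse smoothing with a momentum-like fade-out.
DAMPING = 0.1  # 0 < DAMPING <= 1; smaller = smoother, slower easing

lon = lat = 0.0                # smoothed angles actually applied to the camera
target_lon = target_lat = 0.0  # raw angles written by the mouse handler

def on_mouse_move(dx, dy, sensitivity=0.1):
    """Mouse handler: only move the targets, never the camera directly."""
    global target_lon, target_lat
    target_lon += dx * sensitivity
    target_lat -= dy * sensitivity

def update():
    """Per-frame update: ease the camera angles toward the targets."""
    global lon, lat
    lon += (target_lon - lon) * DAMPING
    lat += (target_lat - lat) * DAMPING
    # ...then convert lon/lat into a look-at direction, as the example already does.

# Simulate a few frames after a single mouse move to see the fade-out:
on_mouse_move(50, 0)
for frame in range(10):
    update()
    print(f"frame {frame}: lon={lon:.2f}")
```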

Related

Stitching a moving object that is only partially visible in the view of a stationary camera

I am trying to stitch together a moving car that is not completely visible horizontally (although it is completely visible vertically) in the camera's viewport.
The camera is stationary, 1-2 meters away from the moving object (similar to a gate setup), capturing a side view of the car. Because the camera is so close, it can only capture part of the car at a time.
I tried stitching multiple frames using this tutorial, but that only works when the camera itself is rotated around an axis or moved. Otherwise, since the background is the same in every frame, the stitcher matches on background features and simply places the frames on top of each other (see the tutorial for reference).
Basically, what I'm trying to achieve is this: given a video clip of a moving car (such that the complete car is never in one frame), build an image of the complete car by stitching the video frames.
Any algorithm, library, or reference would be helpful.
Thanks for your time!

What method should I use to track a moving object with a moving camera (within the resources of a Raspberry Pi)?

I'm playing around with motion detection through a webcam connected to a Raspberry Pi, using OpenCV and cvBlob in C++. I want to kick it up a notch and build a robot that detects and tracks movement, driving towards it and turning left/right to keep the moving object in the center of view.
But I quickly hit a roadblock: I cannot find any material about motion tracking with an active, moving camera that is pitched at an amateur level. I have only found academic papers, e.g. on optical flow. Sure, I could work through one of them if I knew it was the algorithm that suits my needs, but reading all the papers and choosing among them is beyond my level of understanding.
So I would be grateful if someone could point me to the simplest possible method (after all, the Raspberry Pi has quite limited resources) that would let me determine whether the selected blob (I plan to track the biggest blob exceeding a set size) moves along the horizontal axis, as opposed to the apparent background motion caused by the movement of the robot the camera is mounted on. Movement along the vertical axis is irrelevant in this application.
If you use the left and right x-coordinates of the blob, you should be able to determine whether the object is moving by measuring the distances from the left and right image borders to the object's left and right x-coordinates. While the robot is moving left or right, the object has likely stopped once that measured distance starts to decrease.
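A minimal sketch of that measurement, assuming the blob is reported as an (x, y, w, h) bounding box (the helper names below are illustrative, not part of cvBlob):

```python
def border_distances(bbox, frame_width):
    """bbox = (x, y, w, h) of the tracked blob, in pixels."""
    x, _, w, _ = bbox
    left_gap = x                       # blob's left edge to the left image border
    right_gap = frame_width - (x + w)  # blob's right edge to the right image border
    return left_gap, right_gap

def gap_deltas(prev_bbox, curr_bbox, frame_width):
    """Frame-to-frame change of both gaps. The idea from the answer: while the
    robot is panning, watch whether the relevant gap starts to shrink, which
    suggests the blob now drifts in the image only because of the robot's own
    motion (i.e. the object itself has stopped)."""
    prev_left, prev_right = border_distances(prev_bbox, frame_width)
    curr_left, curr_right = border_distances(curr_bbox, frame_width)
    return curr_left - prev_left, curr_right - prev_right
```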

iOS Camera Color Recognition in Real Time: Tracking a Ball

I have been looking around for a bit and know that people are able to track faces with Core Image and OpenGL. However, I am not sure where to start when it comes to tracking a colored ball with the iOS camera.
Once I have a lead on how to track the ball, I hope to build something that detects when the ball changes direction.
Sorry I don't have source code, but I am unsure where to even start.
The key point is image preprocessing and filtering. Use the camera APIs to get the video stream from the camera and take a snapshot from it. Apply a Gaussian blur (spatial smoothing), then a luminance average threshold filter (to produce a black-and-white image). After that, some morphological preprocessing (opening and closing operators) is wise, to suppress small noise. Then run an edge-detection algorithm (for example a Prewitt operator). After these steps only the edges remain, and your ball should appear as a circle (assuming a reasonably ideal recording environment). Finally, use a Hough transform to find the center of the ball. Record the ball's position, and in the next frame only the small region of the picture around it needs to be processed.
Another keyword to search for is blob detection.
A fast library for image processing (on the GPU, via OpenGL) is Brad Larson's GPUImage library: https://github.com/BradLarson/GPUImage
It implements all the needed filters (except the Hough transform).
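Purely as an illustration of that pipeline (the answer targets GPUImage on iOS; this sketch uses OpenCV in Python instead, and every numeric parameter is a placeholder to tune):

```python
import cv2
import numpy as np

def find_ball(frame_bgr):
    """Blur -> threshold -> morphology -> Hough circle; returns (x, y, r) or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (9, 9), 2)          # spatial smoothing
    # Black-and-white image via a global (Otsu) luminance threshold.
    _, bw = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Morphological opening/closing to suppress small noise.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(bw, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Keep only the thresholded regions of the smoothed image.
    masked = cv2.bitwise_and(blurred, blurred, mask=mask)
    # Hough circle transform. HOUGH_GRADIENT performs its own edge detection
    # internally (Canny, upper threshold = param1), which stands in for the
    # answer's explicit Prewitt edge step.
    circles = cv2.HoughCircles(masked, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                               param1=100, param2=30, minRadius=5, maxRadius=0)
    if circles is None:
        return None
    x, y, r = circles[0][0]
    return int(x), int(y), int(r)   # ball center and radius, in pixels
```

Once the ball has been found, the next frame can be cropped to a small region around the previous position before running the same steps again, as the answer suggests.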
The tracking process can be defined as follows:
Start from the initial coordinates and dimensions of an object with given visual characteristics (image features).
In the next video frame, find the same visual characteristics near the coordinates from the previous frame.
"Near" means considering basic transformations relative to the previous frame:
translation in each direction;
scale;
rotation;
How much these transformations vary is strictly related to the frame rate: the higher the frame rate, the nearer the position will be in the next frame.
The Marvin Framework provides plug-ins and examples to perform this task. It's not compatible with iOS yet; however, it is open source, and I think you could port the source code fairly easily.
This video demonstrates some of its tracking features, starting at 1:10.
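To make the "search near the last coordinate" step concrete, here is a rough sketch using OpenCV template matching as a stand-in (the answer itself recommends Marvin; the function and parameter names below are illustrative):

```python
import cv2

def track_step(frame_gray, template_gray, last_xy, search_radius=40):
    """Find the template near its previous top-left corner, last_xy.
    Assumes the search window stays larger than the template."""
    h, w = template_gray.shape[:2]
    lx, ly = last_xy
    # Restrict the search to a window around the previous position; how far the
    # object can move between frames depends on the frame rate, so a higher
    # frame rate allows a smaller search_radius.
    x0 = max(0, lx - search_radius)
    y0 = max(0, ly - search_radius)
    x1 = min(frame_gray.shape[1], lx + w + search_radius)
    y1 = min(frame_gray.shape[0], ly + h + search_radius)
    window = frame_gray[y0:y1, x0:x1]
    scores = cv2.matchTemplate(window, template_gray, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_loc = cv2.minMaxLoc(scores)
    new_xy = (x0 + best_loc[0], y0 + best_loc[1])
    return new_xy, best_score
```

Plain template matching only handles translation; tolerating the scale and rotation changes mentioned above would require matching against scaled/rotated templates or using proper image features.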

XNA value of Apply3D positions

I'm currently working on 3D positional audio in my 3D XNA game (using SoundEffectInstance); however, I'm having trouble finding the correct values for the positions of the emitters and the listener.
I started out setting the position of my listener to the camera position (it's a first-person game), and the positions of the various emitters to the positions of the objects emitting the sound. Doing this muted the sound completely, compared to before I used the Apply3D method.
I did some experimenting with the values and found that once I made the position values much, much smaller, I started hearing the sound. My map spans 0 to 5000 in the x/z plane (and only about 0 to 500 on the y axis), so the distance between listener and emitter is generally large compared to the values at which I could hear anything at all (between 0 and 1).
Is there any way to control what "close" and "far away" mean for the SoundEffectInstance? Or am I supposed to normalize the distance values? I read through several guides on 3D sound, but I have not seen anything about normalizing or controlling the distance values.
After some more testing, I found that simply dividing the position values by a factor (around 500 seems right for me) gives a good result. I don't know if this is the way it's supposed to be done, but it seems to be working fine.
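A tiny sketch of that workaround, just to show the arithmetic (XNA itself is C#; the constant and helper name below are illustrative, and the factor is the one the answer found empirically):

```python
AUDIO_SCALE = 1.0 / 500.0  # empirically chosen divisor from the answer above

def to_audio_space(position):
    """Scale a world-space (x, y, z) position down before handing it to Apply3D."""
    x, y, z = position
    return (x * AUDIO_SCALE, y * AUDIO_SCALE, z * AUDIO_SCALE)

# Both the listener and every emitter must be scaled by the same factor,
# otherwise relative distances (and thus attenuation and panning) are distorted.
listener_position = to_audio_space((2500.0, 100.0, 2500.0))
emitter_position  = to_audio_space((2600.0, 100.0, 2500.0))
```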

How to simulate a shaky cam with opencv?

I'm trying to simulate a shaky cam in a static video. I could choose a couple of points randomly and then pan/zoom/warp using easing, but I was wondering if there's a better, more standard way.
A shaky camera will usually not include zooming. The image rotation component would also be very small, and can probably be ignored. You can probably get sufficient results with 2D translation only.
What you should probably do is define your shake path over time - the amount of image motion, relative to the original static video, for each frame - and then shift each frame by that amount. The path should be relatively smooth rather than completely random jitter, since you are simulating physical hand motion.
You might want to crop your video a bit to hide any blank parts near the image border; remaining blank regions can be filled using in-painting.
To make the effect more convincing, you should also add motion blur. The direction of the blur follows the shake path, and its amount depends on the current shake speed.
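A rough sketch of the translation-only version with OpenCV in Python (the file names, amplitudes, and smoothing kernel are placeholders; motion blur is left out for brevity):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("static.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
n = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

# Shake path: a random walk smoothed with a moving average, so the motion is
# jittery but still resembles hand movement rather than white noise.
rng = np.random.default_rng(0)
path = np.cumsum(rng.normal(0.0, 1.5, size=(n, 2)), axis=0)
kernel = np.ones(9) / 9.0
path[:, 0] = np.convolve(path[:, 0], kernel, mode="same")
path[:, 1] = np.convolve(path[:, 1], kernel, mode="same")
path -= path.mean(axis=0)        # keep the wander centered on the original view
path = np.clip(path, -20, 20)    # limit shake amplitude, in pixels

crop = 24                        # crop margin to hide blank borders after shifting
out = cv2.VideoWriter("shaky.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                      fps, (w - 2 * crop, h - 2 * crop))

for i in range(n):
    ok, frame = cap.read()
    if not ok:
        break
    dx, dy = path[i]
    M = np.float32([[1, 0, dx], [0, 1, dy]])          # pure 2D translation
    shifted = cv2.warpAffine(frame, M, (w, h))
    out.write(shifted[crop:h - crop, crop:w - crop])  # crop away the blank edges

cap.release()
out.release()
```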
