YouTube embed in three.js for multiple textures

I'm trying to use this example ( http://threejs.org/examples/css3d_sandbox.html )
to add specific YouTube videos and a webcam source as textures for planes.
Any specific code tips?
Many thanks

Related

How to stabilize video with OpenCV?

I have a video that was taken with a moving camera and contains objects. I would like to stabilize the video so that all objects remain in the same position in the video feed.
How can I do this with OpenCV?
For example, if I have two frames prev_frame and next_frame, how do I transform next_frame so that the camera appears stationary, as in the example below?
example:
There are some existing answers, but most of them are no longer relevant because they lack code examples or rely on methods and libraries that no longer exist.
Thanks.
If you wish to do this with OpenCV, this article should be helpful. It includes code samples.
Doug

Can I feed ARKit facial capture pre-recorded videos?

I know a similar question has been asked before, but I have a very specific use case and thought I might put more details here.
I'm not an iOS dev, but I'm very curious about ARKit. Specifically, I want to test out the facial capture (tracking facial expressions) using ARKit's ARFaceAnchor.
I want to know if it is possible to feed pre-recorded videos to ARKit instead of the camera feed.
I did find an article and sample code about using "sensor replay":
https://github.com/ittybittyapps/ARRecorder/tree/master/ARRecorder
https://www.ittybittyapps.com/blog/2018-09-24-recording-arkit-sessions/
Unfortunately, it is not possible to ship an app this way.
I know that the facial capture doesn't necessarily require depth sensor data (I've tried it by holding up the camera to a pre-recorded face on a monitor).
So I'm curious whether anyone knows of a way to feed a static, pre-recorded video to ARKit?
Thanks in advance for the help :)
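
For context, the live-camera face-capture path that a pre-recorded video would have to replace looks roughly like this. This is only a minimal sketch; the class name and the printed blend shapes are illustrative, not something from the post or from a replay API.

import ARKit

// Minimal live ARKit face capture: run a face-tracking session and read the
// per-expression blend-shape coefficients from each ARFaceAnchor update.
final class FaceCaptureController: NSObject, ARSessionDelegate {
    private let session = ARSession()

    func start() {
        guard ARFaceTrackingConfiguration.isSupported else {
            print("Face tracking needs a TrueDepth-capable device")
            return
        }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    // ARKit delivers ARFaceAnchor updates here; blendShapes holds coefficients in 0...1.
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let faceAnchor as ARFaceAnchor in anchors {
            let jawOpen = faceAnchor.blendShapes[.jawOpen]?.floatValue ?? 0
            let smileLeft = faceAnchor.blendShapes[.mouthSmileLeft]?.floatValue ?? 0
            print("jawOpen: \(jawOpen), mouthSmileLeft: \(smileLeft)")
        }
    }
}

As far as I know there is no public hook in this flow for substituting your own video frames for the camera feed, which is why the sensor-replay approach above relies on non-shippable functionality.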

Multiple Transformations in a Merged Video iOS

I've been working with two different libraries: https://github.com/dev-labs-bg/swift-video-generator and https://github.com/Awalz/SwiftyCam.
These libraries provide the ability to record and instantly merge two different videos. When using the front facing (selfie) camera, I prefer the video to be mirrored (Snapchat style). It looks more normal. If I take two selfie videos and merge them, the video generator understands the preferredTransform, and using AVAssetWriter, correctly merges the videos together while keeping their mirrored appearance. Similarly, if there are two videos taken with the back camera, the generator understands the transform and merges the videos together.
However, if a selfie video is taken (mirrored by SwiftyCam) and then merged with a video from the back camera, the generator doesn't know how to apply multiple transformations: the merged video takes on the preferredTransform of the first video and flips one of the videos that shouldn't be flipped.
How do you deal with multiple transformations when merging video on iOS?
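
One common way to handle this when compositing (a sketch under assumptions, not something from the original post) is to stop relying on a single preferredTransform and instead set a transform per clip on an AVMutableVideoCompositionLayerInstruction, folding a horizontal flip into the front-camera clip's transform. frontClip and backClip below are hypothetical, already-loaded AVAssets; rotated portrait clips would also need the render size and flip width adjusted. The answer that follows takes a different, CIFilter-based route.

import AVFoundation

// Sketch: merge two clips onto one composition track, giving each clip its own transform.
func makeComposition(frontClip: AVAsset, backClip: AVAsset) throws -> (AVMutableComposition, AVMutableVideoComposition) {
    let composition = AVMutableComposition()
    guard
        let track = composition.addMutableTrack(withMediaType: .video,
                                                preferredTrackID: kCMPersistentTrackID_Invalid),
        let frontTrack = frontClip.tracks(withMediaType: .video).first,
        let backTrack = backClip.tracks(withMediaType: .video).first
    else { throw NSError(domain: "merge", code: -1) }

    try track.insertTimeRange(CMTimeRange(start: .zero, duration: frontClip.duration),
                              of: frontTrack, at: .zero)
    try track.insertTimeRange(CMTimeRange(start: .zero, duration: backClip.duration),
                              of: backTrack, at: frontClip.duration)

    // One layer instruction, but a different transform starting at each clip's start time.
    let layer = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
    let size = frontTrack.naturalSize  // simplification: assumes both clips share this size

    // Front camera: keep the mirrored look by flipping horizontally on top of
    // the track's own preferredTransform.
    let mirror = CGAffineTransform(scaleX: -1, y: 1).translatedBy(x: -size.width, y: 0)
    layer.setTransform(frontTrack.preferredTransform.concatenating(mirror), at: .zero)

    // Back camera: just use its preferredTransform.
    layer.setTransform(backTrack.preferredTransform, at: frontClip.duration)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero, duration: composition.duration)
    instruction.layerInstructions = [layer]

    let videoComposition = AVMutableVideoComposition()
    videoComposition.renderSize = size
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
    videoComposition.instructions = [instruction]
    return (composition, videoComposition)
}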
I would apply a CIFilter to the captured frames. It's fast (processing-wise), fairly simple, and there are tons of examples if you Google.
Start off by having a look at Apple's CIFunHouse sample.
Then, when you're up to speed, this kernel filter mirrors the image horizontally:
kernel vec4 coreImageKernel(sampler image)
{
    // Current pixel's coordinate in the sampler's space.
    vec2 pixCoord = samplerCoord(image);
    // Mirror horizontally: x' = width - x.
    pixCoord.x = samplerSize(image).x - pixCoord.x;
    return sample(image, pixCoord);
}
Or use the built-in CIAffineTransform filter if you don't want to write your own.
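A minimal sketch of that built-in route, assuming the frames arrive as CIImage (the function name is just for illustration):

import CoreImage

// Mirror a frame horizontally, e.g. so a front-camera clip keeps its selfie look
// before being written out. Same effect as running it through CIAffineTransform.
func mirroredHorizontally(_ image: CIImage) -> CIImage {
    let flip = CGAffineTransform(scaleX: -1, y: 1)
        .translatedBy(x: -image.extent.width, y: 0)
    return image.transformed(by: flip)
}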
/Anders.

How to put a video camera feed in a 3d scene with a 3d sphere component

I am learning the A-Frame VR framework. I've studied all the tutorials and examples I can find, but I could not find an example that accesses the web camera and places a 3D component, such as a sphere, in the camera's video scene the way Pokémon Go does. Does anybody have a pointer or a clue?
There are a couple of A-Frame components that provide webcam support:
https://github.com/flysonic10/aframe-passthrough
https://github.com/jesstelford/aframe-video-billboard
Both include instructions for getting started.

Reverse Engineering Google Cardboard

I'm building a 360 video viewer for iOS in order to better understand the nuances of monoscopic and stereoscopic 360 video. Effectively, I'm trying to reverse engineer what the Google Cardboard SDK does.
My in-progress code is here: https://github.com/alfiehanssen/ThreeSixtyPlayer
I understand monoscopic video and the basics of stereoscopic video. I would like to better understand the client-side configuration/transformations required to display stereoscopic video.
Specifically, in two parts:
Cardboard can display monoscopic video files in stereoscopic mode. Although the same exact video is being presented to each eye, each eye's video view is clearly different. What transformations are being made to each eye to get this to work?
For stereoscopic videos (let's assume top/bottom layout), it also appears that transformations are applied to each eye's video view. What are the transformations being applied to each eye's video view?
It looks as though the video is being skewed. There are black masks applied to all sides of each eye's video; where are these coming from, and are they the result of transformations?
A sample screenshot from a Cardboard view:
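
As far as I understand it, the monoscopic-to-stereoscopic part comes down to rendering the same scene twice, with the virtual camera offset horizontally by about half the interpupillary distance per eye; the warped look and the dark borders come from a separate barrel-distortion pass that pre-compensates for the pincushion distortion of the viewer's lenses. Below is a minimal SceneKit sketch of the per-eye offset only, assuming a videoSphereScene that already has the 360 video mapped onto a sphere (the distortion pass and masks are not shown):

import SceneKit

// Render one monoscopic video sphere for two eyes by giving each eye its own
// camera, offset horizontally by half the interpupillary distance (scene units).
// videoSphereScene is assumed to already contain the textured sphere.
func makeStereoViews(for videoSphereScene: SCNScene,
                     eyeBounds: CGRect,
                     interpupillaryDistance: Float = 0.064) -> (left: SCNView, right: SCNView) {

    func makeEye(offsetX: Float, frame: CGRect) -> SCNView {
        let view = SCNView(frame: frame)
        view.scene = videoSphereScene

        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(x: offsetX, y: 0, z: 0) // viewer sits at the sphere's center
        videoSphereScene.rootNode.addChildNode(cameraNode)
        view.pointOfView = cameraNode
        return view
    }

    let half = interpupillaryDistance / 2
    let rightFrame = eyeBounds.offsetBy(dx: eyeBounds.width, dy: 0)
    return (left: makeEye(offsetX: -half, frame: eyeBounds),
            right: makeEye(offsetX: +half, frame: rightFrame))
}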

Resources