How to fix aspect ratio when embedding YouTube video?

How can I fix the aspect ratio of a video I'm embedding? (Emphasis on the word embedding; I'm not the owner of the video.)

Related

Swift iOS Crop Video in Real Time

Videos are recorded in a 16:9 ratio, uploaded to S3, and then downloaded to multiple devices (desktop, tablet, and phone). Playback of the video on iOS should occur at a 9:16 ratio.
My goal is to crop the video playback in real time to 9:16, cutting off the outer edges, but also enlarging it if need be. What is the fastest and most efficient way of accomplishing this with Swift?
My concern is CPU overhead doing this on the phone.
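The crop geometry itself is just rectangle arithmetic and costs essentially nothing per frame; the CPU question is really about the rendering path (in Swift this would typically go through AVFoundation rather than per-pixel work). A minimal, language-agnostic sketch of the centered 9:16 crop, written here in Python with an example 1920x1080 frame size:

```python
def crop_rect_16_9_to_9_16(width, height):
    """Return (x, y, w, h) of a centered 9:16 crop inside a width x height frame.

    The full height is kept and the width is narrowed, cutting off the
    outer edges as the question describes.
    """
    target_w = height * 9 // 16   # width that gives a 9:16 portrait ratio
    x = (width - target_w) // 2   # center the crop horizontally
    return x, 0, target_w, height

# Example: a 1920x1080 (16:9) frame yields a centered 607x1080 crop.
rect = crop_rect_16_9_to_9_16(1920, 1080)
```

Enlarging is then a separate uniform scale applied after the crop, which on iOS the video layer can do for free when it fits the cropped content to the view.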

Is it possible to detect if an image has been resized without maintaining aspect ratio?

Say we have some images (from photographs) in some aspect ratio, like 16:9, that may have been resized and downsampled from another aspect ratio, like 4:3; the image would then be squished along its height dimension. Or it may have been resized from one arbitrary shape to another, changing the ratio of the long edge to the short edge. Another way to say this is that one spatial dimension has been downsampled to a greater extent than the other.
I'm curious if there's a basic way to take an educated guess about whether an axis of an image has been squished. My intuition is that it might be possible by checking the Fourier transform of the image, because the frequencies along the axis that was squished would be shifted higher than they were before the resize.
Is this a reasonable intuition, and what would the implementation look like?
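The intuition can be sketched as comparing how the spectral energy is distributed along each axis: squeezing an axis compresses features, which raises frequencies (in cycles per pixel) along that axis. A rough heuristic, shown here on a synthetic pattern whose vertical frequency is double its horizontal one (as if the height had been halved); note this is only a heuristic, since real scene content can be anisotropic on its own:

```python
import numpy as np

def axis_spectral_centroids(img):
    """Energy-weighted mean frequency along each axis (vertical, horizontal).

    A markedly higher centroid on one axis suggests that axis was
    downsampled (squished) more than the other.
    """
    f = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
    h, w = img.shape
    fy = np.abs(np.arange(h) - h // 2) / h   # normalized vertical frequencies
    fx = np.abs(np.arange(w) - w // 2) / w   # normalized horizontal frequencies
    cy = (f.sum(axis=1) * fy).sum() / f.sum()
    cx = (f.sum(axis=0) * fx).sum() / f.sum()
    return cy, cx

# Synthetic stand-in for a "height-squished" image: the vertical sinusoid
# runs at twice the frequency of the horizontal one.
y, x = np.mgrid[0:256, 0:256]
img = np.sin(2 * np.pi * 16 * y / 256) + np.sin(2 * np.pi * 8 * x / 256)
cy, cx = axis_spectral_centroids(img)   # expect cy > cx
```

In practice you would need a baseline for what "unsquished" content looks like (natural-image spectra fall off roughly isotropically), so this gives an educated guess rather than a reliable detector.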

Reverse Engineering Google Cardboard

I'm building a 360 video viewer for iOS in order to better understand the nuances of monoscopic and stereoscopic 360 video. Effectively, I'm trying to reverse engineer what the Google Cardboard SDK does.
My in-progress code is here: https://github.com/alfiehanssen/ThreeSixtyPlayer
I understand monoscopic video and the basics of stereoscopic video. I would like to better understand the client-side configuration/transformations required to display stereoscopic video.
Specifically, in two parts:
Cardboard can display monoscopic video files in stereoscopic mode. Although the same exact video is being presented to each eye, each eye's video view is clearly different. What transformations are being made to each eye to get this to work?
For stereoscopic videos (let's assume top/bottom layout), it also appears that transformations are applied to each eye's video view. What are the transformations being applied to each eye's video view?
It looks as though the video is being skewed, and there are black masks applied to all sides of each eye's video. Where are these coming from? Are they the result of transformations?
A sample screenshot from a Cardboard view: (screenshot not included here)
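For the top/bottom stereoscopic case, the first step is unambiguous: each eye's sub-image is simply one half of the frame. A minimal sketch of that split (in Python/NumPy for illustration; the per-eye distortion and masking the question asks about are separate steps applied after this, and the top-equals-left-eye convention assumed here varies between sources):

```python
import numpy as np

def split_top_bottom(frame):
    """Split a top/bottom stereoscopic frame into (left_eye, right_eye) views.

    Assumed convention: top half -> left eye, bottom half -> right eye.
    Each eye's view is then independently warped/masked by the viewer.
    """
    h = frame.shape[0] // 2
    return frame[:h], frame[h:]

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # placeholder video frame
left, right = split_top_bottom(frame)
```

The skew and edge masks described in the question are, as far as I know, typically produced by the SDK's per-eye pre-warp that counteracts the headset's lens distortion, but that is an assumption about Cardboard's internals rather than something stated in this thread.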

OpenCV: GoPro video editing blur

I am attempting to post-process a video in OpenCV. The problem is that the GoPro video is very blurry, even with a high frame rate.
Is there any way that I can remove the blur? I've heard about deinterlacing, but I don't know whether that applies to a GoPro HERO3+, or where to even begin.
Any help is appreciated.
You can record at a higher frame rate to reduce motion blur. Also make sure you are recording with enough natural light, so recording outdoors is recommended.
Look at this video: https://www.youtube.com/watch?v=-nU2_ERC_oE
At 30 fps there is some blur on the car, but at 60 fps the blur is nearly nonexistent; just doubling the frame rate helps a lot. Since you have a HERO3+, you can record 720p at 60 fps, and that will reduce the blur. WVGA at 120 fps can also help (the example is 1080p, but the point still applies).
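On the post-processing side, motion blur that is already baked into the footage cannot be truly removed without deconvolution, but a cheap sharpening pass can improve perceived sharpness. A minimal unsharp-mask sketch (OpenCV's `cv2.GaussianBlur` would normally supply the blur; a NumPy box blur stands in here so the example is self-contained, and the zero-padded borders will show a slight halo at image edges):

```python
import numpy as np

def box_blur(img, k=5):
    """Separable box blur (stand-in for a Gaussian blur)."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)

def unsharp_mask(img, amount=1.0, k=5):
    """Sharpen by adding back the high-frequency residual:
    out = img + amount * (img - blurred)."""
    img = img.astype(float)
    return np.clip(img + amount * (img - box_blur(img, k)), 0, 255)

# Example: a step edge gets over/undershoot, i.e. it looks crisper.
img = np.full((64, 64), 50.0)
img[:, 32:] = 200.0
sharp = unsharp_mask(img)
```

For real deblurring of known, uniform motion blur you would look at Wiener or Richardson-Lucy deconvolution instead, but those need an estimate of the blur kernel.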

OpenCV perspectiveTransform correct target width and height?

I am currently working with OpenCV and its perspective transformation functions. I'd like to find a way to accurately determine the target rectangle based on the data (the source image) I have.
I already found this thread: https://stackoverflow.com/questions/7199116/perspective-transform-how-to-find-a-target-width-and-height
It states that it is not possible to determine the correct aspect ratio from the data contained in the source image alone, but is there at least a good algorithm to get a reasonable estimate?
No, there isn't a way to do it from the image alone. Imagine you were taking a picture of an A4 sheet of paper resting on a table, but viewing it from a near-horizontal angle. If you used the aspect ratio apparent in the image, you'd end up with a really long, thin rectangle.
However, if you know the pose of the camera relative to the target (i.e. the rotation matrix) and the camera intrinsic parameters, then you can recover the aspect ratio.
Have a look at this paper (it's actually really interesting, though the English isn't the best): equation (20) is the key one. Also, look at this blog post where someone's implemented the approach.
If you don't know the orientation of the camera then the best bet is to put in some sort of aspect ratio that is at least ballpark. If you have any other info about the rectangle, use that (for example if I was always taking photos of A[0,1,2,...] pieces of paper, these have a known fixed aspect ratio).
good luck!
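The "known fixed aspect ratio" fallback from the answer can be sketched concretely: use the measured edge lengths of the detected quadrilateral only to pick the output scale and orientation, and take the ratio itself from prior knowledge (A4 paper here as an example). The resulting (w, h) would then define the destination corners for `cv2.getPerspectiveTransform`; the corner ordering assumed below is top-left, top-right, bottom-right, bottom-left:

```python
import numpy as np

A4_RATIO = 297 / 210  # long edge / short edge of A4 paper

def target_size(src_quad, known_ratio=A4_RATIO):
    """Choose a target (w, h) for a rectangle of known aspect ratio.

    src_quad: four corners (tl, tr, br, bl) of the detected quadrilateral.
    Edge lengths in the image set the scale and decide which side is the
    long one; the aspect ratio comes from prior knowledge, since it cannot
    be recovered from the image alone.
    """
    p = np.asarray(src_quad, dtype=float)
    w_est = (np.linalg.norm(p[1] - p[0]) + np.linalg.norm(p[2] - p[3])) / 2
    h_est = (np.linalg.norm(p[3] - p[0]) + np.linalg.norm(p[2] - p[1])) / 2
    short = min(w_est, h_est)
    if w_est >= h_est:  # landscape orientation in the image
        return int(round(short * known_ratio)), int(round(short))
    return int(round(short)), int(round(short * known_ratio))

# Usage: w, h = target_size(corners); destination corners are then
# (0, 0), (w, 0), (w, h), (0, h) for the perspective transform.
```

This is only the heuristic path; when the camera pose and intrinsics are known, equation (20) of the paper mentioned above gives the principled estimate instead.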
