How to display and control full-screen video in an Xbox game - XNA

I would like to create a game for the Xbox 360 which is mostly full-screen HD videos. The player will be given choices during the game to determine which video is to be played.
I need very fine-grained control over the video, such as controlling playback speed, seeking to specific video frames, and possibly applying simple effects to the videos.
I also want to be able to use augmented reality to add elements to the videos, so I need to be able to render 3D objects over the video.
It would be great if this could be done in XNA; however, there is only basic video playback functionality there. What other options do I have?

Your options for decoding videos are limited. The VideoPlayer class provides functionality for playing videos from the start, pausing and resuming them, looping them, and setting their audio volume.
As far as displaying videos goes - you have a huge degree of freedom. You basically get each frame of the video as a texture that you can draw as a sprite, or apply to any 3D object. This includes using it as an input to a pixel shader, allowing you to apply all kinds of effects to the video.
The only alternative to the built-in player is to create your own. If you want to target the Xbox 360, this will limit you to managed code only. I am not aware of any suitable video decoder libraries.
For Windows, a little Googling revealed this library, which may be a good starting point.

Related

Is it possible to record video and overlay with CALayer simultaneously using AVVideoCompositionCoreAnimationTool?

I'm currently working on an app that records video of the user and binds particle emitters to hand landmarks on the preview. As a result, the effects are shown on the camera preview, but they aren't captured in the recorded video.
I saw a great tutorial on AVVideoCompositionCoreAnimationTool https://www.raywenderlich.com/6236502-avfoundation-tutorial-adding-overlays-and-animations-to-videos, but that way it is only possible to render animations onto an already recorded video.
I wonder if there is any chance to use AVVideoCompositionCoreAnimationTool to record video from the camera and the animations in real time, or if you know another method to do it without diving deep into Metal and so on.
Thanks in advance!
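For reference, the export-time approach that tutorial describes looks roughly like the sketch below (the function and layer names are illustrative, not from the tutorial): AVVideoCompositionCoreAnimationTool composites a CALayer tree over an already recorded asset during export, which is exactly why it doesn't help with a live capture preview.

    import AVFoundation
    import QuartzCore

    // Sketch of the post-recording overlay approach: the overlayLayer (e.g. the emitter layers)
    // is rendered on top of the video frames when the asset is exported, not during capture.
    func makeOverlayExportSession(for asset: AVAsset, overlayLayer: CALayer) -> AVAssetExportSession? {
        let videoComposition = AVMutableVideoComposition(propertiesOf: asset)

        let videoLayer = CALayer()
        let parentLayer = CALayer()
        parentLayer.frame = CGRect(origin: .zero, size: videoComposition.renderSize)
        videoLayer.frame = parentLayer.frame
        parentLayer.addSublayer(videoLayer)
        parentLayer.addSublayer(overlayLayer)   // animations sit above the video frames

        videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(
            postProcessingAsVideoLayer: videoLayer, in: parentLayer)

        let session = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetHighestQuality)
        session?.videoComposition = videoComposition
        return session
    }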

How should I implement video transitions with Metal Kit and AVFoundation?

I am making a video editor and so far I have been able to apply filters to a single frame. The way I currently have everything set up works perfectly. It's a lot of code to show and I don't really need help with the code specifically, so I'll just explain what I do. I use a video composition WITHOUT a custom compositor; it has one AVMutableVideoCompositionInstruction, but multiple AVCompositionTracks (one for each asset) in my AVMutableComposition. Each track has its own layer instructions that handle scale, orientation, and position for each video. I then extract frames using a player item video output and render the frames with Metal to apply filters and effects. This works really well and the performance is great.
Now I am faced with applying transitions, which requires me to overlap tracks in the AVMutableComposition. The problem with this is that the video output can't extract frames from specific track IDs and will only extract the top layer. Also, when I overlap tracks, the video doesn't show at all. So I came to the conclusion that I need a custom compositor. I implemented the compositor, but there are a few problems. I can't use layer instructions, but I know this can easily be solved by handling my transforms directly through my vertex shader. The biggest issue is that I need filters to be applied to each frame before the transition is done. For example, in a transition between A and B: when I extract frames from track A for the transition, I need all of track A's filters and effects to be applied, and when I extract frames from track B I need all of track B's filters to be applied. Then I need to render the transition with the filtered frames from A and B. I can do this in the compositor, but I won't be able to make live updates. I need live updates for my app; for example, changing the intensity of track A's filter with a slider should show every single increment updated live on the player. This solution doesn't allow for that, since I would have to change the entire video composition to change the properties of the instructions and/or video compositor.
I've also looked into using an AVAssetReader; however, I am not sure if it will be fast enough or able to handle seeking through videos efficiently.
So to recap, what I need is a way to extract frames from specific tracks that are overlapped and allow for live updates of any filters. If anyone can lead me in the right direction I'd appreciate it. Thank you.
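As a point of reference, a rough sketch of the setup described in the first paragraph above might look like the following: one AVMutableComposition with a video track per asset, a single AVMutableVideoCompositionInstruction whose layer instructions carry each track's transform, and an AVPlayerItemVideoOutput that feeds frames to the Metal pass. It lays the clips back to back rather than overlapping them, and every name in it is illustrative rather than taken from the question.

    import AVFoundation

    // Sketch only: sequential clips, one layer instruction per track, frames pulled
    // through AVPlayerItemVideoOutput for a separate Metal filtering pass.
    func makePlayerItem(for assets: [AVAsset]) throws -> (AVPlayerItem, AVPlayerItemVideoOutput) {
        let composition = AVMutableComposition()
        let instruction = AVMutableVideoCompositionInstruction()
        var layerInstructions: [AVMutableVideoCompositionLayerInstruction] = []
        var cursor = CMTime.zero

        for asset in assets {
            guard let sourceTrack = asset.tracks(withMediaType: .video).first,
                  let track = composition.addMutableTrack(withMediaType: .video,
                                                          preferredTrackID: kCMPersistentTrackIDInvalid)
            else { continue }
            let range = CMTimeRange(start: .zero, duration: asset.duration)
            try track.insertTimeRange(range, of: sourceTrack, at: cursor)

            // Scale/orientation/position per video, as described above.
            let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
            layerInstruction.setTransform(sourceTrack.preferredTransform, at: cursor)
            layerInstructions.append(layerInstruction)
            cursor = cursor + asset.duration
        }

        instruction.timeRange = CMTimeRange(start: .zero, duration: cursor)
        instruction.layerInstructions = layerInstructions

        let videoComposition = AVMutableVideoComposition()
        videoComposition.instructions = [instruction]
        videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
        videoComposition.renderSize = CGSize(width: 1920, height: 1080)

        let item = AVPlayerItem(asset: composition)
        item.videoComposition = videoComposition

        // Frames come out of this output and go into the Metal filter pipeline.
        let output = AVPlayerItemVideoOutput(
            pixelBufferAttributes: [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA])
        item.add(output)
        return (item, output)
    }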

iOS timecode-synced downloadable animation system

As an introduction and context, I'm currently a novice iOS app developer and I want to make sure I'm not reinventing the wheel too much as I make this app (reinventing wheels can get very expensive.)
The app will allow the user to download our videos off the internet and will allow storage for offline usage. The problem with storing these videos on the device is that many of them will be too long and thus too big to be practical to store.
The videos are quite simple however, consisting of a couple short "real" video clips at the beginning and end, with the bulk of the video being still images animated around the screen. The animations would consist solely of opacity and simple transformation keyframes (translate, scale, rotate around static anchor point), and would require a variety of easing functions for each transition.
The hardest part likely would be that the "video" player will also have to be able to track with an audio player's timecode, and will have to support seeking to any arbitrary point like a normal video player.
So, now that I've described the problem, here's the solution I've come up with so far. Hopefully doing it this way will reduce the probability of XY problems. :)
The idea is to basically do a dumbed-down version of what Final Cut and other editing programs do with animations—have a bunch of clips, sometimes overlapping, and be able to animate the position, scale, rotation, and opacity of each using keyframes.
My first instinct, as far as implementation goes, is to use some of iOS's game engine functionality to do the animations (maybe SceneKit, because it seems to allow animations to use scene time as opposed to real time, despite the fact that it's primarily 3D and I am doing 2D animations), manually handle syncing time with the audio player, and manually handle adding and removing nodes from the scene when seeking through the video and when clips begin or end.
What are some built-in systems, plugins, etc. that I can take advantage of to make this easier and faster to develop and maintain? Double points if I don't have to transcode the animations by hand to some custom format.
As I mentioned in my comment, your question is rather broad and contains multiple questions in one. I will address what you mentioned is likely the hardest part:
https://developer.apple.com/documentation/avfoundation/avplayeritem
https://developer.apple.com/documentation/avfoundation/avasset
Instead of SceneKit, take a look at SpriteKit and its SKVideoNode.
Also, research Metal video processing. There are quite a few example projects available that you could use as a starting point.
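A minimal sketch of the SKVideoNode idea, assuming an AVAudioPlayer acts as the master clock (the class name, drift threshold, and timescale are assumptions, not anything from the answer): the AVPlayer behind the video node is periodically snapped back onto the audio timecode from SpriteKit's per-frame update, which also gives you arbitrary seeking for free.

    import SpriteKit
    import AVFoundation

    // Sketch: an SKVideoNode driven by an AVPlayer, kept in step with an AVAudioPlayer
    // that serves as the master clock for the whole piece.
    final class SyncedVideoScene: SKScene {
        private let videoPlayer: AVPlayer
        private let videoNode: SKVideoNode
        private let audioPlayer: AVAudioPlayer

        init(size: CGSize, videoURL: URL, audioPlayer: AVAudioPlayer) {
            let player = AVPlayer(url: videoURL)
            self.videoPlayer = player
            self.videoNode = SKVideoNode(avPlayer: player)
            self.audioPlayer = audioPlayer
            super.init(size: size)
            videoNode.size = size
            videoNode.position = CGPoint(x: size.width / 2, y: size.height / 2)
            addChild(videoNode)
            videoNode.play()
        }

        required init?(coder: NSCoder) { fatalError("init(coder:) is not supported") }

        // SpriteKit calls this once per rendered frame; nudge the video back onto the audio timecode.
        override func update(_ currentTime: TimeInterval) {
            let audioTime = CMTime(seconds: audioPlayer.currentTime, preferredTimescale: 600)
            let drift = (videoPlayer.currentTime() - audioTime).seconds
            if abs(drift) > 0.1 {   // re-sync only when the drift becomes noticeable
                videoPlayer.seek(to: audioTime, toleranceBefore: .zero, toleranceAfter: .zero)
            }
        }
    }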

How to play video over SKScene

Basically I have an SKScene, and I want to play a video over the scene. The video is confetti falling with an alpha background. It will play when the player gets a high score. I am using an SKScene with shapes and images drawn with shape nodes and image nodes. I just was wondering if anyone could please tell me how to play the video over the screen and still see the game in the back, and be able to touch the buttons through the video. It is supposed to look like an animation playing.
I am using a video because I was just thinking that playing a video would be more processor efficient than having the game generate particles.
There is no built-in iOS solution. You can play 24BPP (fully opaque) movies under iOS, but the only built-in way to display alpha channel video would be to load a series of PNG images with alpha. The downside is that this takes up a huge amount of memory and bloats the app download. If you want to have a look at some working examples of this kind of functionality in a third-party app, see Alpha Channel Examples. You might also be interested in this blog post, which shows example code for how alpha channel textures can be implemented in OpenGL; the same approach could be implemented on top of SpriteKit too. The cube example shows rendering an alpha channel movie onto a cube; it was adapted from a Ray Wenderlich tutorial.
Here is an answer describing how to do that with GPUImageView. There is also a project on GitHub here and a similar question from Stack Overflow.
The video stack doesn't yet support formats with alpha. For confetti, you should use SKEmitterNode. Size it to the area you envisioned for your video, and see Creating Particle Effects, in particular its link to Add a particle emitter to your project, and try out the "Snow" effect. It looks more like confetti when you give it a different color than white. Click the dot under "Color Ramp" to set the color.
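A minimal sketch of that SKEmitterNode approach, assuming a hypothetical Confetti.sks particle file created from Xcode's "Snow" template and recolored as described above; the emitter is just another node, so the game underneath stays visible and its buttons stay tappable.

    import SpriteKit

    extension SKScene {
        // Drop a confetti burst over the whole scene, assuming a "Confetti.sks" particle
        // file built from the "Snow" template with its Color Ramp changed from white.
        func showConfetti(for duration: TimeInterval = 3.0) {
            guard let emitter = SKEmitterNode(fileNamed: "Confetti") else { return }
            emitter.position = CGPoint(x: size.width / 2, y: size.height)      // emit from the top edge
            emitter.particlePositionRange = CGVector(dx: size.width, dy: 0)    // spread across the full width
            emitter.zPosition = 1_000                                          // draw above the game nodes
            addChild(emitter)
            emitter.run(.sequence([.wait(forDuration: duration),
                                   .fadeOut(withDuration: 0.5),
                                   .removeFromParent()]))
        }
    }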

Load video from iPhone library, modify frame and play it in real-time

I'm looking for tips on developing an application for iPhone/iPad that will be able to process video (let's consider only local files stored on the device, for simplicity) and play it in real time. For example, you could choose any movie, choose an "Old movie" filter, and see it as if it were playing on an old tube TV.
In order to make this idea real I need to implement two key features:
1) Grab frames and the audio stream from a movie file and get access to separate frames (I'm interested in raw pixel buffers in BGRA or at least YUV color space).
2) Display the processed frames somehow. I know it's possible to render a processed frame to an OpenGL texture, but I would like a more powerful component with playback controls. Is there any media player class that supports playing custom image and audio buffers?
The processing function is done and it's fast (it takes less than the duration of one frame).
I'm not asking for a ready-made solution, but any tips are welcome!
Answer
Frame grabbing.
It seems the only way to grab video and audio frames is to use the AVAssetReader class. Although it's not recommended for real-time grabbing, it does the job. In my tests on an iPad 2, grabbing a single frame takes about 7-8 ms. Seeking across the video is tricky. Maybe someone can point to a more efficient solution?
Video playback. I've done this with a custom view and GLES to render a rectangular texture with a video frame inside it. As far as I know, it's the fastest way to draw bitmaps.
Problems
Sound samples need to be played manually.
AVAssetReader grabbing should be synchronized with the movie's frame rate; otherwise the movie will play too fast or too slow.
AVAssetReader allows only sequential frame access; you can't seek forward and backward. The only solution I've found is to delete the old reader and create a new one with a trimmed time range.
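To make the frame-grabbing part concrete, here is a hedged sketch of the AVAssetReader loop that hands back BGRA pixel buffers. The helper name is made up, and the commented-out timeRange line illustrates the "new reader with a trimmed time range" workaround for seeking mentioned above.

    import AVFoundation

    // Sketch: walk a movie's video track and hand back each frame as a BGRA pixel buffer.
    func readFrames(from url: URL, handle: (CVPixelBuffer, CMTime) -> Void) throws {
        let asset = AVURLAsset(url: url)
        guard let videoTrack = asset.tracks(withMediaType: .video).first else { return }

        let reader = try AVAssetReader(asset: asset)
        let output = AVAssetReaderTrackOutput(
            track: videoTrack,
            outputSettings: [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA])
        reader.add(output)

        // AVAssetReader only moves forward; to "seek", throw this reader away and build a
        // new one with a trimmed time range before calling startReading():
        // reader.timeRange = CMTimeRange(start: seekTime, duration: .positiveInfinity)

        guard reader.startReading() else { return }   // reader.error carries the details
        while let sample = output.copyNextSampleBuffer() {
            guard let pixelBuffer = CMSampleBufferGetImageBuffer(sample) else { continue }
            let presentationTime = CMSampleBufferGetPresentationTimeStamp(sample)
            handle(pixelBuffer, presentationTime)   // process the BGRA frame here
        }
    }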
This is how you would load a video from the camera roll.
This is a way to start processing video. Brad Larson did a great job.
How to grab video frames.
You can use AVPlayer + AVPlayerItem; they give you a chance to apply a filter to the displayed image.
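One way to do that (a sketch of the general idea, not necessarily what the answerer had in mind) is AVFoundation's applyingCIFiltersWithHandler video composition, which runs a Core Image filter over every frame the AVPlayerItem displays. CISepiaTone stands in here for an "old movie" look; the function name is illustrative.

    import AVFoundation
    import CoreImage

    // Sketch: play a local movie through AVPlayer while filtering every frame with Core Image.
    func makeFilteredPlayerItem(for url: URL) -> AVPlayerItem {
        let asset = AVURLAsset(url: url)
        let filter = CIFilter(name: "CISepiaTone")!   // stand-in for the "old movie" effect

        let composition = AVMutableVideoComposition(asset: asset) { request in
            filter.setValue(request.sourceImage.clampedToExtent(), forKey: kCIInputImageKey)
            filter.setValue(0.8, forKey: kCIInputIntensityKey)
            let filtered = filter.outputImage!.cropped(to: request.sourceImage.extent)
            request.finish(with: filtered, context: nil)
        }

        let item = AVPlayerItem(asset: asset)
        item.videoComposition = composition   // AVPlayer picks up the filtered frames automatically
        return item
    }

    // Usage: let player = AVPlayer(playerItem: makeFilteredPlayerItem(for: movieURL))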
