I'm using SceneKit with Metal (not OpenGL) and would like to allow a user to record a video of themselves playing the game. Any ideas how I can render the scene to a video? (There's no need to record the scene audio, which might make it simpler.)
I thought I'd add it as an answer:
ReplayKit should do the job fine, though it does require iOS 9 and a device that supports Metal (A7 or later). I've never used it, but from what I remember of WWDC 2015 it only requires a few lines of code to set up. There are tons of tutorials on it available on the net.
This one seems to include most bits such as starting and stopping recording, as well as excluding interface objects from the video if required.
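For reference, here's a minimal sketch of the ReplayKit calls involved (Swift; the wrapper class and method names are my own, error handling is kept to a bare minimum, and presenting the preview controller is just one way to hand the result back to the user):

```swift
import ReplayKit
import UIKit

// Minimal ReplayKit sketch. RPScreenRecorder captures whatever is on screen,
// including a Metal-backed SceneKit view.
final class GameRecorder {
    private let recorder = RPScreenRecorder.shared()

    func start() {
        guard recorder.isAvailable else { return }
        recorder.startRecording { error in
            if let error = error {
                print("Could not start recording: \(error)")
            }
        }
    }

    func stop(presentingFrom presenter: UIViewController) {
        recorder.stopRecording { previewController, error in
            if let error = error {
                print("Could not stop recording: \(error)")
                return
            }
            // ReplayKit hands back a preview UI the user can trim, save, or share from.
            if let previewController = previewController {
                presenter.present(previewController, animated: true)
            }
        }
    }
}
```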
I am recording videos and playing them back using AVFoundation. Everything is perfect except the hissing that is present throughout the whole video. You can hear this hissing in every video captured from any iPad. Even videos captured with Apple's built-in camera app have it.
To hear it clearly, you can record a video in a place as quiet as possible without saying anything. It can be detected very easily through headphones with the volume at maximum.
After researching, I found out that this hissing is produced by the device's preamplifier and cannot be avoided while recording.
The only possible solution is to remove it during post-processing of the audio. High-frequency noise such as hiss can be reduced with a low-pass filter and a noise gate. There are applications and software like Adobe Audition which can perform this operation. This video shows how it is achieved using Adobe Audition.
I have searched the Apple docs and found nothing that can achieve this directly. So I want to know if there exists any library, API, or open-source project that can perform this operation. If not, how can I start heading in the right direction? It does look like a complex task.
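To give a concrete starting point for the low-pass-filter idea mentioned above, here is a naive one-pole low-pass filter applied to a buffer of PCM samples (Swift). This is only an illustration of the principle, not a production noise-reduction algorithm; the cutoff value is an assumption you would tune by ear, and a real solution would likely use a proper biquad or Accelerate/vDSP routines.

```swift
import Foundation

/// Naive one-pole low-pass filter: attenuates content above roughly `cutoff` Hz,
/// which is where hiss lives. Recurrence: y[n] = y[n-1] + alpha * (x[n] - y[n-1])
func lowPassFiltered(_ samples: [Float], sampleRate: Float, cutoff: Float) -> [Float] {
    let rc = 1.0 / (2.0 * Float.pi * cutoff)
    let dt = 1.0 / sampleRate
    let alpha = dt / (rc + dt)

    var output = [Float](repeating: 0, count: samples.count)
    var previous: Float = 0
    for (index, sample) in samples.enumerated() {
        previous = previous + alpha * (sample - previous)
        output[index] = previous
    }
    return output
}

// Usage (values chosen arbitrarily): filter 44.1 kHz samples, keeping content below ~6 kHz.
// let cleaned = lowPassFiltered(noisySamples, sampleRate: 44100, cutoff: 6000)
```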
I'm a Unity dev and need to help out colleagues with doing this natively in Obj-C. In Unity it's no big deal:
1) Samples are stored in memory as a List of float[].
2) A helper function returns a float[] of n size for any given sample, at any given offset.
3) Another helper function fades the data if needed.
4) An AudioClip object is created with the right size to accommodate all cut samples, and is then filled at the appropriate offsets.
5) The AudioClip is assigned to a player component (AudioSource).
6) AudioSource.Play(ulong offsetInSamples) plays at a sample-accurate time in the future. Looping is also just a matter of setting the AudioSource object's loop parameter.
I would very much appreciate it if someone could point me towards the right classes to achieve similar results in Obj-C on iOS devices. I'm pretty sure a lot of iOS audio newbies would be interested too. Many thanks in advance!
Gregzo
A good overview of the relevant audio APIs available in iOS is here
The highest level framework that makes sense for patching together audio clips, setting their volume levels, and playing them back in your case is probably AVFoundation.
It will involve creating AVAssets, adding them to AVPlayerItems, possibly putting them into AVMutableCompositions to merge multiple items together and adjust their volumes (audioMix), and then playing them back with AVPlayer.
AVFoundation works with AVAsset; for converting between the relevant formats and lower-level bytes you'll want to have a look at AudioToolbox (I can't post more than two links yet).
For a somewhat simpler API with less control, have a look at AVAudioPlayer. If you need greater control (e.g. games - real-time / low latency) you might need to use OpenAL for playback.
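As a rough illustration of that AVFoundation flow, here is a sketch that stitches two audio files into one AVMutableComposition, lowers the volume of the second clip via an audio mix, and plays the result with AVPlayer (Swift; the file URLs, volume values, and error handling are placeholders, and a real implementation would load asset durations asynchronously):

```swift
import AVFoundation

// Sketch: concatenate two audio files, drop the second one's volume, and play the result.
func playStitchedClips(firstURL: URL, secondURL: URL) throws -> AVPlayer {
    let composition = AVMutableComposition()
    guard let track = composition.addMutableTrack(withMediaType: .audio,
                                                  preferredTrackID: kCMPersistentTrackID_Invalid) else {
        throw NSError(domain: "StitchExample", code: -1)
    }

    // One audio-mix parameters object for the composition track; volume changes
    // are scheduled at the time each clip starts.
    let mixParameters = AVMutableAudioMixInputParameters(track: track)
    var cursor = CMTime.zero

    for (url, volume) in [(firstURL, Float(1.0)), (secondURL, Float(0.5))] {
        let asset = AVURLAsset(url: url)
        guard let sourceTrack = asset.tracks(withMediaType: .audio).first else { continue }

        let range = CMTimeRange(start: .zero, duration: asset.duration)
        try track.insertTimeRange(range, of: sourceTrack, at: cursor)
        mixParameters.setVolume(volume, at: cursor)

        cursor = CMTimeAdd(cursor, asset.duration)
    }

    let audioMix = AVMutableAudioMix()
    audioMix.inputParameters = [mixParameters]

    let item = AVPlayerItem(asset: composition)
    item.audioMix = audioMix

    let player = AVPlayer(playerItem: item)
    player.play()
    return player
}
```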
Although I've searched SO and read the documentation multiple times on AVCaptureConnection, AVCaptureSession, AVCaptureVideoPreviewLayer, AVCaptureDevice, AVCaptureInput/Output … I'm still confused about all this AV stuff. When it comes to this, it's one big pile of abstract words to me that don't make much sense. I'm asking for someone to shed some light on the subject for me here.
So, can anyone explain coherently in plain english the logic of proper setup and use of the media devices? What is AVCaptureVideoPreviewLayer? What is AVCaptureConnection? Input/Output?
I want to catch the basic idea the people who made this stuff had while making it.
Thanks
I wish I had more time to write a more thorough reply. Here are some simplified basics:
In order to work with audio and video coming from the hardware, destined for the screen or files, you need to set up an AVCaptureSession that helps coordinate the sources and the destinations, using AVCaptureConnections. You use the session instance to start and stop the process, along with setting some output properties like bitrate and quality. You use the AVCaptureConnection instance(s) to control the connection between an AVCaptureInputPort and an AVCaptureOutput (or AVCaptureVideoPreviewLayer), such as monitoring audio input levels or setting the orientation of the video.
AVCaptureInputPorts are the individual media streams provided by an AVCaptureInput (typically an AVCaptureDeviceInput wrapping an AVCaptureDevice) - the device is where your video or audio is coming from, such as the camera or the microphone. You will normally look through all available devices and choose those that have the properties you are looking for, such as whether they provide audio, or whether they are the front-facing camera.
AVCaptureOutput is where the AV is sent - it might be a file or a routine that allows you to process the data in real-time, etc.
AVCaptureVideoPreviewLayer is a Core Animation layer that is optimized for very fast rendering of the output of the selected video input device (front or back camera). You typically use this to show your user what input you are working with - sort of like a camera viewfinder.
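Putting those pieces together, a minimal camera-plus-microphone setup looks roughly like this (Swift; the chosen devices and preset are just examples, error handling is trimmed, and permission prompts are not shown):

```swift
import AVFoundation
import UIKit

// Sketch of the session -> inputs -> outputs -> preview-layer wiring described above.
final class CaptureController {
    let session = AVCaptureSession()
    let movieOutput = AVCaptureMovieFileOutput()
    private(set) var previewLayer: AVCaptureVideoPreviewLayer?

    func configure(in view: UIView) throws {
        session.beginConfiguration()
        session.sessionPreset = .high   // example preset; pick what fits your quality needs

        // Inputs: back camera + default microphone (AVCaptureDevice wrapped in AVCaptureDeviceInput).
        if let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) {
            let cameraInput = try AVCaptureDeviceInput(device: camera)
            if session.canAddInput(cameraInput) { session.addInput(cameraInput) }
        }
        if let microphone = AVCaptureDevice.default(for: .audio) {
            let audioInput = try AVCaptureDeviceInput(device: microphone)
            if session.canAddInput(audioInput) { session.addInput(audioInput) }
        }

        // Output: record to a movie file (other outputs deliver raw frames, metadata, etc.).
        if session.canAddOutput(movieOutput) { session.addOutput(movieOutput) }

        session.commitConfiguration()

        // Preview layer: the "viewfinder" showing what the selected camera sees.
        let layer = AVCaptureVideoPreviewLayer(session: session)
        layer.frame = view.bounds
        layer.videoGravity = .resizeAspectFill
        view.layer.addSublayer(layer)
        previewLayer = layer

        session.startRunning()
    }
}
```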
If you are going to use this stuff, then you must read Apple's AV Foundation Programming Guide
Here's an image that may help you some more (from above-mentioned doc):
A more detailed view:
I am interested in a way to play sounds from specific points in space relative to the user.
Basically I would like to say the user is at point (0,0) and a sound came from (10,10) and then take a sound and send it through some library that plays it, sounding as though it came from the source (10,10). Performance in doing this would be very important.
If it wasn't painfully obvious from reading the question, I have very little experience with audio on any device.
After doing a little research, it seems the options are to use the OpenAL framework, which is supported by Apple, or essentially roll your own on top of Audio Units.
There is a 3D Mixer Audio Unit that Apple provides, which requires you to develop a lot of understanding of Audio Units.
Then there is OpenAL which is a cross platform audio framework where you can position a "source" and a "listener" and it will compute attenuation and stereo for you.
Both require a low-level understanding of audio playback and are not very fun. So I figured I might as well jump all the way into the water and learn about Audio Units, since I may want to do some more specialized stuff in the future.
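To make the OpenAL source/listener model above a bit more concrete, here is a bare-bones sketch of positioning the listener at (0,0) and a source at (10,10), written as Swift calls into the OpenAL C API (this assumes the iOS OpenAL framework is importable as `import OpenAL`; loading actual sample data into a buffer is omitted, so this only illustrates the spatial part):

```swift
import OpenAL

// Bare-bones OpenAL spatialization sketch. Buffer/sample loading is omitted;
// the point is the listener/source positioning that OpenAL derives attenuation
// and panning from.
func setUpSpatialSource() -> ALuint {
    // Open the default output device and make a context current.
    let device = alcOpenDevice(nil)
    let context = alcCreateContext(device, nil)
    alcMakeContextCurrent(context)

    // The listener is "the user", placed at the origin.
    alListener3f(AL_POSITION, 0, 0, 0)

    // One source, positioned at (10, 10) in the same coordinate space.
    var source: ALuint = 0
    alGenSources(1, &source)
    alSource3f(source, AL_POSITION, 10, 10, 0)

    // After attaching a filled buffer with alSourcei(source, AL_BUFFER, ...),
    // alSourcePlay(source) would play it from that position.
    return source
}
```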
This is an easy wrapper for the iOS OpenAL functionality: ObjectAL-for-iPhone
Play around with the example and see if it does what you want.
I would like to create a game for XBox360 which is mostly full-screen HD videos. The player will be given choices during the game to determine which video is to be played.
I need very fine-grained control over the video such as controlling playback speed, seeking to video frames and possibly applying simple effects to the videos.
I also want to be able to use augmented reality to add elements to the videos so I need to be able to render 3d objects over the video.
It would be great if this could be done in XNA; however, there is only basic video playback functionality there. What other options do I have?
Your options for decoding videos are limited. The VideoPlayer class provides functionality for playing videos from the start, pausing and resuming them, looping them, and setting their audio volume.
As far as displaying videos goes - you have a huge degree of freedom. You basically get each frame of the video as a texture that you can draw as a sprite, or apply to any 3D object. This includes using it as an input to a pixel shader, allowing you to apply all kinds of effects to the video.
The only alternative to the built-in player is to create your own. If you want to target the Xbox 360 this will limit you to managed code only. I am not aware of any suitable video decoder libraries.
For Windows, a little Googling revealed this library, which may be a good starting point.