I'm currently trying to use UIVideoEditorController to trim a video file previously selected from the Photos app via UIImagePickerController. I verified that after choosing the file, and before creating the UIVideoEditorController, the file still has its original resolution. However, after trimming, the output is always 360p, despite setting the videoQuality property to .typeHigh. The property appears to be ignored and the video ends up being compressed unnecessarily.
I have found multiple reports of similar issues, but I have yet to find a workaround.
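For reference, this is roughly how the original resolution can be checked on the picked file (the post doesn't show the exact check, so this is an assumed sketch; internalURL is the URL of the selected video):
import AVFoundation
// Assumed sketch: inspect the picked file's video track to confirm its dimensions.
let asset = AVAsset(url: internalURL)
if let track = asset.tracks(withMediaType: .video).first {
    // naturalSize ignores rotation, so apply preferredTransform first.
    let size = track.naturalSize.applying(track.preferredTransform)
    print("Source dimensions: \(abs(size.width)) x \(abs(size.height))")
}
The editor itself is set up as follows: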
let editor = UIVideoEditorController()
editor.delegate = self
editor.videoMaximumDuration = 0 // No limit for now
// TODO: This should work; however, in testing the imported videos are always scaled down
// to 320p, which corresponds to the default value of .typeLow.
editor.videoQuality = .typeHigh
editor.videoPath = internalURL.path
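One fallback I'm aware of (untested here, and not part of the setup above) would be to do the trim myself with AVAssetExportSession and a passthrough preset, so the footage is never re-encoded. A rough sketch, where startTime, endTime and outputURL are placeholders:
// Untested sketch: trim internalURL to a time range without re-encoding.
let asset = AVAsset(url: internalURL)
if let export = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetPassthrough) {
    export.outputURL = outputURL          // placeholder destination
    export.outputFileType = .mov
    export.timeRange = CMTimeRange(start: startTime, end: endTime) // placeholder range
    export.exportAsynchronously {
        if export.status == .completed {
            print("Trimmed without re-encoding")
        } else {
            print("Export failed: \(String(describing: export.error))")
        }
    }
}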
Any help would be greatly appreciated.
I'm trying to add a table view (an AudioKit AKTableView) to an existing UIView object on my storyboard, but every time I add it with what should be the correct frame, I run into an issue.
let sampleTable = AKTableView(AKTable(file: sampleReferences[indexPath.row].sampleFile!), frame: samplePlot.bounds)
samplePlot.addSubview(sampleTable)
This is the code I'm using at the moment. I've also tried samplePlot.frame.
Basically, sampleReferences[indexPath.row].sampleFile! is an AKAudioFile recorded earlier and kept as a reference from a UITableView row. It's quite a large program split across multiple files, so it's hard to show everything.
The only other issue I can think of is that I haven't stopped any of the running AudioKit processes. I'm not sure whether the table can only be drawn at startup.
The same code works with a .sine specifier, AKTable(.sine), so it isn't an issue with my UI code. It also fails when trying to load even a basic .mp3 file from .resources.
Update:
I've compressed the test .mp3 file significantly and shortened its overall length to 2 seconds, and I'm still getting the same issue. I think it might be a problem with the actual function used to draw the file, although I could be wrong.
Another update:
So everything works fine using the following code
let file = EZAudioFile(url: sampleAudio!.url)
guard let data = file?.getWaveformData() else {
    print("Failed to get waveform data")
    return
}
let waveform = EZAudioPlot()
waveform.frame = samplePlot.bounds
waveform.plotType = EZPlotType.buffer
waveform.shouldFill = true
waveform.shouldMirror = true
waveform.color = UIColor.black
waveform.updateBuffer(data.buffers[0], withBufferSize: data.bufferSize)
samplePlot.addSubview(waveform)
but obviously that relies on EZAudioPlot directly. I grabbed this from a previous Stack Overflow question about the same thing and edited it a bit.
I'm working on a video app. We are changing from regular mp4 files to HLS; one of the many reasons for the change is that we get much more control over the bandwidth usage of videos (we load lots of other things in our player, so we need to optimize the experience as much as possible).
AVFoundation introduced, in iOS 10, the ability to control bandwidth usage using:
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:self.urlAsset];
playerItem.preferredForwardBufferDuration = 30.0;
playerItem.preferredPeakBitRate = 200000.0; // Remember this line
There's also a property introduced in iOS 11 to cap the resolution of the item, preferredMaximumResolution. We're using it, but we still need a solution for iOS 10 devices.
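For reference, in Swift that iOS 11 cap looks roughly like this (the specific size is just an example):
// Illustrative only: limit HLS variant selection to roughly 480p on iOS 11+.
if #available(iOS 11.0, *) {
    playerItem.preferredMaximumResolution = CGSize(width: 854, height: 480)
}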
Now we have control over preferredPeakBitRate, which is nice, but there's a problem: not all of the HLS sources are generated by us. Let's say we want to cap the resolution at 480p when the user is not connected to a Wi-Fi network; today I have no way to achieve that, because I won't always know how much bandwidth the 480p rendition of the selected HLS playlist needs.
One thing I was considering is reading the information inside the m3u8 file, to at least know which quality renditions my player can offer and how much bandwidth each one needs.
One way to do this would be to download the m3u8 playlist as plain text, then use a regex to read the file and process that data. I'm trying to avoid that; I feel it should be far less difficult.
I cannot read this information from the tracks, because a) I can't find the information there, and b) the tracks are replaced dynamically when the quality changes (one track per quality level).
So I don't know how to get this information. I've searched Google and Stack Overflow and can't find it. Can anyone help?
Here's an example of what I want to do. I have this example playlist:
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=314000,RESOLUTION=228x128,CODECS="mp4a.40.2"
test-hls-1-16a709300abeb08713a5cada91ab864e_hls_duplex_192k.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=478000,RESOLUTION=400x224,CODECS="avc1.42001e,mp4a.40.2"
test-hls-1-16a709300abeb08713a5cada91ab864e_hls_duplex_400k.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=691000,RESOLUTION=480x270,CODECS="avc1.42001e,mp4a.40.2"
test-hls-1-16a709300abeb08713a5cada91ab864e_hls_duplex_600k.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1120000,RESOLUTION=640x360,CODECS="avc1.4d001f,mp4a.40.2"
test-hls-1-16a709300abeb08713a5cada91ab864e_hls_duplex_1000k.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1661000,RESOLUTION=960x540,CODECS="avc1.4d001f,mp4a.40.2"
test-hls-1-16a709300abeb08713a5cada91ab864e_hls_duplex_1500k.m3u8
And I just want to have that information available in an array inside my code, something like this:
NSArray<ZZMetadata *> *metadataArray = self.urlAsset.bandwidthMetadata;
NSLog(@"Metadata info: %@", metadataArray);
And print something like this:
<__NSArrayM 0x123456789> (
<ZZMetadata 0x234567890> {
trackId: 1
neededBandwidth: 314000
resolution: 228x128
codecs: ...
...
}
<ZZMetadata 0x345678901> {
trackId: 2
neededBandwidth: 478000
resolution: 400x224
}
...
)
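In the absence of a public AVFoundation API for this, the manual route described above boils down to fetching the master playlist text and parsing the #EXT-X-STREAM-INF attribute lists. A rough Swift sketch, with an illustrative StreamVariant type standing in for ZZMetadata (none of these names are AVFoundation API):
import Foundation

// Illustrative model standing in for ZZMetadata; not an AVFoundation type.
struct StreamVariant {
    let bandwidth: Int
    let resolution: String?
    let codecs: String?
    let uri: String
}

// Parse the #EXT-X-STREAM-INF entries of a master playlist downloaded as plain text.
func parseMasterPlaylist(_ playlist: String) -> [StreamVariant] {
    // Matches KEY=VALUE pairs, where VALUE may be a quoted string containing commas (CODECS).
    let attributePattern = try! NSRegularExpression(pattern: "([A-Z0-9-]+)=(\"[^\"]*\"|[^,]*)")
    var variants: [StreamVariant] = []
    let lines = playlist.components(separatedBy: .newlines).map {
        $0.trimmingCharacters(in: .whitespaces)
    }

    for (index, line) in lines.enumerated() where line.hasPrefix("#EXT-X-STREAM-INF:") {
        let attributes = String(line.dropFirst("#EXT-X-STREAM-INF:".count))
        var bandwidth = 0
        var resolution: String?
        var codecs: String?

        let range = NSRange(attributes.startIndex..., in: attributes)
        attributePattern.enumerateMatches(in: attributes, options: [], range: range) { match, _, _ in
            guard let match = match,
                  let keyRange = Range(match.range(at: 1), in: attributes),
                  let valueRange = Range(match.range(at: 2), in: attributes) else { return }
            let key = String(attributes[keyRange])
            let value = String(attributes[valueRange]).replacingOccurrences(of: "\"", with: "")
            switch key {
            case "BANDWIDTH":  bandwidth = Int(value) ?? 0
            case "RESOLUTION": resolution = value
            case "CODECS":     codecs = value
            default:           break
            }
        }

        // By convention the variant URI is on the line that follows the tag.
        let uri = index + 1 < lines.count ? lines[index + 1] : ""
        variants.append(StreamVariant(bandwidth: bandwidth,
                                      resolution: resolution,
                                      codecs: codecs,
                                      uri: uri))
    }
    return variants
}
Feeding this the playlist above and picking the largest bandwidth at or below the desired resolution would then give a value to plug into preferredPeakBitRate on iOS 10.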
I have a question.
Recently I needed to add custom tags to a recorded video (a local video on the device, not a streamed one). The task is to add some event-specific tags to the video, whose positions could be set by pressing forward/backward buttons, like in any player.
It is not important whether the movie file is in mov or mp4 format.
I searched the forum and found several samples showing how to add metadata using AVAssetExportSession, and that worked.
However, when I tried to add metadata using AVAssetWriter, I wasn't able to append the attributes to the video.
What I do not understand is why, after adding an attribute, the returned time and duration properties are always invalid.
For instance, let's say I have a video with a duration of 2 seconds.
I have tried different key spaces. I am not able to write keys from the ID3 space.
Is ID3 only used for streamed video? (As far as I understand, ID3 is the metadata format of .mp3.) That would explain why I was not able to write it into an MPEG-4 file.
I also tried QuickTimeUserData and ISOUserData, but the results are the same.
Here is an example:
AVMutableMetadataItem *item2 = [AVMutableMetadataItem new];
item2.keySpace = AVMetadataKeySpaceiTunes;
item2.key = AVMetadataiTunesMetadataKeyUserComment;
item2.value = @"One two three";
item2.duration = CMTimeMakeWithSeconds(1, 1);
item2.time = CMTimeMakeWithSeconds(0, 1);
After reading the item back, I got the following:
AVMutableMetadataItem: 0xa4301f0, keySpace=itsk, key=\U00a9cmt, commonKey=(null), locale= (null), value=One two three, time={INVALID}, duration={INVALID}, extras={\n dataType = 1;\n}
I would like to use the time and duration properties of the metadata instead of writing custom data and post-processing it.
Ideally it would be great to append an array of items with time = t1, duration = d1, ..., (tn, dn).
Does anyone know how to accomplish that?
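For reference, the usual route when writing with AVAssetWriter is a dedicated timed metadata track, via AVAssetWriterInputMetadataAdaptor and AVTimedMetadataGroup; asset-level metadata does not preserve per-item time and duration. A hedged Swift sketch, assuming an AVAssetWriter named writer that is already being configured, with an illustrative custom key:
import AVFoundation
import CoreMedia

// Illustrative custom QuickTime-metadata key ("mdta" key space); not from the original post.
let identifier = AVMetadataItem.identifier(forKey: "com.example.eventTag",
                                           keySpace: .quickTimeMetadata)!

// Describe the metadata the track will carry so the writer input can be created.
let spec: [String: Any] = [
    kCMMetadataFormatDescriptionMetadataSpecificationKey_Identifier as String: identifier.rawValue,
    kCMMetadataFormatDescriptionMetadataSpecificationKey_DataType as String: kCMMetadataBaseDataType_UTF8 as String
]
var formatDescription: CMFormatDescription?
CMMetadataFormatDescriptionCreateWithMetadataSpecifications(allocator: kCFAllocatorDefault,
                                                            metadataType: kCMMetadataFormatType_Boxed,
                                                            metadataSpecifications: [spec] as CFArray,
                                                            formatDescriptionOut: &formatDescription)

let metadataInput = AVAssetWriterInput(mediaType: .metadata,
                                       outputSettings: nil,
                                       sourceFormatHint: formatDescription)
let adaptor = AVAssetWriterInputMetadataAdaptor(assetWriterInput: metadataInput)
writer.add(metadataInput)

// Later, while the writer session is running, append items that do carry time and duration.
let item = AVMutableMetadataItem()
item.identifier = identifier
item.dataType = kCMMetadataBaseDataType_UTF8 as String
item.value = "One two three" as NSString

let group = AVTimedMetadataGroup(items: [item],
                                 timeRange: CMTimeRange(start: .zero,
                                                        duration: CMTime(seconds: 1, preferredTimescale: 600)))
adaptor.append(group)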
I ended up with a solution that adds chapters to the video file instead of using metadata.
I looked at the available libraries and took mpv4lib.
The library is not currently compiled for iOS, so I ported the source project to a static library for the iOS platform.
That library allows adding custom "atoms" to an mp4 file, and one of them is a QuickTime text track containing chapters.
I did something similar to that post.
The library is located here.
This question is about the PortAudio framework. A little background before I ask the question: I am working on an application that uses PortAudio to output audio through a multichannel (8-channel) device. However, the device I am using does not expose itself as a single 8-channel device; instead it shows up in my device list as 4 stereo devices. While searching for an approach to handle this, I learned that the WinMME host API in PortAudio supports combining multiple devices.
I went through the appropriate header file (pa_win_wmme.h) and followed the suggestions there, but I get the 'Invalid device' error (error number -9996) after calling Pa_OpenStream(). The header file does in fact specify the right parameters to use when configuring multiple devices to avoid this error, but in spite of following them, I still get it.
So I wanted to know whether anybody has faced a similar issue and whether I have missed or misconfigured anything.
I am pasting the required snippets of code below for reference:
PaStreamParameters outputParameters;
PaWinMmeStreamInfo wmmeStreamInfo;
PaWinMmeDeviceAndChannelCount wmmeDeviceAndNumChannels;
...
...
outputParameters.device = paUseHostApiSpecificDeviceSpecification;
outputParameters.channelCount = 8;
outputParameters.sampleFormat = paFloat32; /* 32 bit floating point processing */
outputParameters.hostApiSpecificStreamInfo = NULL;
wmmeStreamInfo.size = sizeof(PaWinMmeStreamInfo);
wmmeStreamInfo.hostApiType = paMME;
wmmeStreamInfo.version = 1;
wmmeStreamInfo.flags = paWinMmeUseMultipleDevices;
wmmeDeviceAndNumChannels.channelCount = 2;
wmmeDeviceAndNumChannels.device = 3;
wmmeStreamInfo.devices = &wmmeDeviceAndNumChannels;
wmmeStreamInfo.deviceCount = 4;
outputParameters.hostApiSpecificStreamInfo = &wmmeStreamInfo;
The device id = 3 was obtained through
Pa_GetHostApiInfo( Pa_HostApiTypeIdToHostApiIndex( paMME ) )->defaultOutputDevice
I hope I have made the question clear enough. I will be happy to provide more details if required.
Thanks.
I finally figured out the mistake :-)
The configuration for multiple devices must be passed as an array. For instance, in the above case,
wmmeDeviceAndNumChannels must be an array of 4, with each element's device field containing the device index of one of the 4 stereo devices and its channelCount remaining 2. outputParameters.channelCount still has to be the aggregate number of channels, i.e. 8. With this I was able to run the application with a single stream and, of course, without any errors related to an invalid device or an invalid number of channels. :-)
Thanks.
Based on the code pasted above, it looks like you are trying to call open on a single 8-channel device. Instead you would have to get the Pa index of all four devices and call open four times, once for each stereo device. You would then have four interleaved stereo streams to maintain. My guess is that changing channelCount = 8 to channelCount = 2 will allow the first stream to open.
My understanding currently is that:
CameraUI
I can use CameraUI to access the built-in camera with MediaType.VIDEO; that delegates to the built-in video camera app and lets me record a video. My app does that now.
When I stop recording and click the "Use" button, I am returned to my app and theoretically I have a valid MediaPromise.
iOS does not provide a valid/usable URL or filename for the recorded video (or for photos), so I would have to use a Loader to bring in, use, or access the recorded video. Furthermore, iOS does not actually create a file anywhere on the device, most importantly not in the Camera Roll, where one would expect it based on the normal behavior of the system's native camera/video app.
The documentation says that the Loader can load various image types and SWFs, but says nothing about video data. So I conclude that I cannot actually use CameraUI to generate a valid MediaPromise that I can then pass to a Loader class or similar in order to read in the data created by the system camera and then manipulate it (upload it, save it to applicationStorageDirectory, and/or display it in one of the two video player components available in the API).
CameraRoll
There can be video entities in the iOS Camera Roll, but the AS3/AIR 3.5 CameraRoll class won't let me view, access, or reference them in any way.
Normal File I/O
All my attempts to use the AIR 3.5 File classes to browse to the storage location of the iOS Camera Roll have been rebuffed.
------- Questions -------
Am I correct in believing that there is a way to take video but no way to use the video that's been captured (no way to use the resulting MediaPromise successfully)?
I believe you can take video and access it on Android, but there's nothing in the documentation that says you cannot on iOS.
Am I correct in believing that iOS sandboxes apps so that they cannot browse to video/photo storage using standard file I/O, but only through the apparently unworkable means I've tried (CameraUI and CameraRoll)?
Am I wrong to think that these are rather obvious needs that one can meet via the Xcode/Objective-C route, but that the AIR mobile framework does not allow, either because Apple blocks the functionality or because Adobe has failed to meet reasonable expectations?
One ironic note: if I use the iOS system camera app to record a video, a thumbnail of that video then appears in the Gallery/Camera Roll, and of course I can share it, view it, or whatever. If I use AIR's CameraRoll.browseForImage(), provided I haven't used the camera to take another image since, the folder icon for the folder where the pictures are stored uses the thumbnail of the last object added, in this case the video I took; but if I then enter the folder, the video cannot be found. It's teasing us. It knows it's there, but it is apparently forbidden fruit.
I can't answer all of your questions, so this may not be acceptable as an answer, but I found this page while searching for a solution to some of the problems you described and thought someone else might find this (partially) useful.
To save the movie you just took, you need to open the promise and read the data from it.
iOS won't save the file anywhere, so MediaPromise.file is always null.
This is my solution to the problem:
private var camera:CameraUI;
private var dataInput:IDataInput;

public function recordVideo():void
{
    // Start the camera and ask for a video
    camera = new CameraUI();
    camera.addEventListener(MediaEvent.COMPLETE, onCameraComplete);
    camera.launch(MediaType.VIDEO);
}

private function onCameraComplete(event:MediaEvent):void
{
    // event.data is a MediaPromise and MediaPromise.open() returns an IDataInput
    // Let's cast it to a dispatcher and wait until loading is complete
    dataInput = event.data.open();
    var dispatcher:IEventDispatcher = IEventDispatcher(dataInput);
    dispatcher.addEventListener(Event.COMPLETE, onDataInputComplete);
}

private function onDataInputComplete(event:Event):void
{
    // We can do whatever we want with the data, so we'll store it in a file in the
    // app's storage directory; a File needs a concrete path before it can be written
    // (the filename here is just an example)
    var file:File = File.applicationStorageDirectory.resolvePath("recordedVideo.mov");
    var bytes:ByteArray = new ByteArray();
    var stream:FileStream = new FileStream();

    // Read the data from the opened MediaPromise and write it out
    dataInput.readBytes(bytes);
    stream.open(file, FileMode.WRITE);
    stream.writeBytes(bytes, 0, bytes.length);
    stream.close();
}
Also, I'm still looking for a way to put the movie into the CameraRoll.