How to change CreationDate Resource Value - iOS

I'm writing a recording app that lets the user trim parts of previous recordings and concatenate them into one new recording.
My problem is this: let's say I recorded an hour-long track and I want to trim the first 2 minutes of it. When I export those 2 minutes, the creation date of the new track will be "now", but I need it to match the date those 2 minutes were actually recorded.
So basically I'm trying to modify the track's URL resource values, but I want to change only the creation date.
Is there a way to do this? Or is there a way to add a new resource value key, or a way to attach the needed date to the URL?
func trimStartingPoint(_ from: Date, startOffSet: TimeInterval, duration: TimeInterval, fileName: String, file: URL, completion: fileExportaionBlock?) {
    let asset = AVURLAsset(url: file)
    var trimmedFileUrl = documentsDirectory().appendingPathComponent(fileName)

    let exporter = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetAppleM4A)
    exporter?.outputFileType = AVFileTypeAppleM4A
    exporter?.outputURL = trimmedFileUrl

    let start = CMTimeMake(Int64(startOffSet), 1)
    let end = CMTimeMake(Int64(startOffSet + duration), 1)
    exporter?.timeRange = CMTimeRangeFromTimeToTime(start, end)

    exporter?.exportAsynchronously {
        if exporter?.status != AVAssetExportSessionStatus.completed {
            print("Error while exporting \(exporter?.error?.localizedDescription ?? "unknown")")
            completion?(nil)
            return
        }
    }

    //------------------------------------------------------
    // this code needs to be replaced
    // (note: it also runs immediately, before the asynchronous export above has finished)
    do {
        var resourceValues = URLResourceValues()
        resourceValues.creationDate = from
        try trimmedFileUrl.setResourceValues(resourceValues)
    } catch {
        deleteFile(atPath: trimmedFileUrl)
        print("Error while setting date - \(error.localizedDescription)")
        completion?(nil)
        return
    }
    //------------------------------------------------------

    completion?(trimmedFileUrl)
}

Have you tried modifying the metadata of the exported recording?
https://developer.apple.com/documentation/avfoundation/avmetadatacommonkeycreationdate
AVMutableMetadataItem *item = [AVMutableMetadataItem metadataItem];
item.key = AVMetadataCommonKeyCreationDate;
item.keySpace = AVMetadataKeySpaceCommon;
item.value = [NSDate date];
NSArray *metadata = @[ item ];
AVAssetExportSession *exportSession = [AVAssetExportSession exportSessionWithAsset:composition presetName:AVAssetExportPresetMediumQuality];
exportSession.metadata = metadata;
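In Swift, the same idea applied to the exporter from the question might look like this (a minimal sketch: exporter and from are the question's own variables, and the metadata must be assigned before exportAsynchronously is called):

let creationItem = AVMutableMetadataItem()
creationItem.key = AVMetadataCommonKeyCreationDate as NSString
creationItem.keySpace = AVMetadataKeySpaceCommon
creationItem.value = from as NSDate  // the date the audio was actually recorded, not Date()
exporter?.metadata = [creationItem]

Unlike a URL resource value, this date travels inside the file itself; whether a given player or inspector surfaces it depends on the container format.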

Related

AudioKit AKPitchShifter & AKTimePitch Pitch Correction

I'm trying to get an autotune-like sound from AKPitchShifter, but the most I get is a chipmunk-type sound. I've played with different combinations of AKTimePitch.pitch and AKPitchShifter.shift, both individually and together, but everything comes out squeaky and too robotic.
I'm new to this library. Is there anything I can add, such as other AudioKit classes, to get the sound closer to autotune?
do {
    let file = try AKAudioFile(readFileName: "someones-voice.wav")
    let player = try AKAudioPlayer(file: file)
    player.looping = true

    let timePitch = AKTimePitch(player)
    timePitch.pitch = 0.5
    AKManager.output = timePitch

    let pitchShifter = AKPitchShifter(player)
    pitchShifter.shift = 1.5
    AKManager.output = pitchShifter // note: this replaces the timePitch output set above

    try AKManager.start()
    player.play()
} catch {
    print(error.localizedDescription)
}
Resolved on GitHub through a pull request addressing a few errors: https://github.com/lsamaria/AutoTuneSampler/pull/3
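For reference, the usual shape of that kind of fix is to chain the nodes instead of assigning AKManager.output twice, since the second assignment replaces the first. A sketch under that assumption, using the question's own node names:

do {
    let file = try AKAudioFile(readFileName: "someones-voice.wav")
    let player = try AKAudioPlayer(file: file)
    player.looping = true

    // player -> timePitch -> pitchShifter; only the last node becomes the output
    let timePitch = AKTimePitch(player)
    timePitch.pitch = 0.5
    let pitchShifter = AKPitchShifter(timePitch)
    pitchShifter.shift = 1.5
    AKManager.output = pitchShifter

    try AKManager.start()
    player.play()
} catch {
    print(error.localizedDescription)
}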

WebRTC(iOS): local video is not getting stream on remote side

I am trying to make an app with audio and video calling using WebRTC.
Remote video and audio work properly in my app, but my local stream does not appear on the client side.
Here is what I have written to add a video track:
let videoSource = self.rtcPeerFactory.videoSource()
let videoCapturer = RTCCameraVideoCapturer(delegate: videoSource)

guard let frontCamera = (RTCCameraVideoCapturer.captureDevices().first { $0.position == .front }),
    // choose highest res
    let format = (RTCCameraVideoCapturer.supportedFormats(for: frontCamera).sorted { (f1, f2) -> Bool in
        let width1 = CMVideoFormatDescriptionGetDimensions(f1.formatDescription).width
        let width2 = CMVideoFormatDescriptionGetDimensions(f2.formatDescription).width
        return width1 < width2
    }).last,
    // choose highest fps
    let fps = (format.videoSupportedFrameRateRanges.sorted { return $0.maxFrameRate < $1.maxFrameRate }.last) else {
    print(.error, "Error in createLocalVideoTrack")
    return nil
}

videoCapturer.startCapture(with: frontCamera,
                           format: format,
                           fps: Int(fps.maxFrameRate))

self.callManagerDelegate?.didAddLocalVideoTrack(videoTrack: videoCapturer)

let videoTrack = self.rtcPeerFactory.videoTrack(with: videoSource, trackId: K.CONSTANT.VIDEO_TRACK_ID)
And this is how I add the audio track:
let constraints: RTCMediaConstraints = RTCMediaConstraints.init(mandatoryConstraints: [:], optionalConstraints: nil)
let audioSource: RTCAudioSource = self.rtcPeerFactory.audioSource(with: constraints)
let audioTrack: RTCAudioTrack = self.rtcPeerFactory.audioTrack(with: audioSource, trackId: K.CONSTANT.AUDIO_TRACK_ID)
My full WebRTC log is attached here.
Some logs I am getting (I think something here is wrong):
(thread.cc:303): Waiting for the thread to join, but blocking calls have been disallowed
(basic_port_allocator.cc:1035): Port[31aba00:0:1:0:relay:Net[ipsec4:2405:204:8888:x:x:x:x:x/64:VPN/Unknown:id=2]]: Port encountered error while gathering candidates.
...
(basic_port_allocator.cc:1017): Port[38d7400:audio:1:0:local:Net[en0:192.168.1.x/24:Wifi:id=1]]: Port completed gathering candidates.
(basic_port_allocator.cc:1035): Port[3902c00:video:1:0:relay:Net[ipsec5:2405:204:8888:x:x:x:x:x/64:VPN/Unknown:id=3]]: Port encountered error while gathering candidates.
Finally, I found the solution: it was due to the TCP protocol in the TURN server.
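For anyone hitting the same wall: make sure the TURN server is offered over a transport that actually works on your network (here, UDP rather than TCP). A sketch of the ICE server configuration, with hypothetical URLs and credentials:

let config = RTCConfiguration()
config.iceServers = [
    RTCIceServer(urlStrings: ["stun:stun.example.com:3478"]),
    // "?transport=udp" requests UDP between the client and the TURN server
    RTCIceServer(urlStrings: ["turn:turn.example.com:3478?transport=udp"],
                 username: "user",
                 credential: "secret")
]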

Convert video from MOV to MP4 in iOS Xamarin

How do I convert MOV to MP4 in Xamarin iOS? I am using AVAssetExportSession, but the last few seconds of the video get clipped off. Is there any solution to retain the full video duration?
Here is my code.
var basePath = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
var inputFilePath = Path.Combine(basePath, Path.ChangeExtension(_vm.Model.AnswerName, "mov"));
var outputFilePath = Path.Combine(basePath, _vm.Model.AnswerName);

var asset = AVAsset.FromUrl(NSUrl.FromFilename(inputFilePath));
AVAssetExportSession export = new AVAssetExportSession(asset, AVAssetExportSession.PresetHighestQuality);
export.OutputUrl = NSUrl.FromFilename(outputFilePath);
export.OutputFileType = AVFileType.Mpeg4;
export.ShouldOptimizeForNetworkUse = true;

try
{
    export.ExportTaskAsync().Wait();
}
catch (Exception ex)
{
    System.Diagnostics.Debug.WriteLine(ex);
}

var fileHelper = new FileHelper();
// If the export was successful then delete the MOV file
if (fileHelper.FileExists(_vm.Model.AnswerName))
{
    fileHelper.DeleteFile(Path.ChangeExtension(_vm.Model.AnswerName, "mov"));
}
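One thing worth checking in this pattern: the existence test on the output file can pass even when the session finished in a Failed state and wrote only a partial file, after which the source is deleted. A minimal Swift/AVFoundation sketch of the same flow that gates deletion on the export status (inputURL and outputURL are hypothetical):

let asset = AVAsset(url: inputURL)
let export = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetHighestQuality)
export?.outputURL = outputURL
export?.outputFileType = AVFileTypeMPEG4
export?.shouldOptimizeForNetworkUse = true
export?.exportAsynchronously {
    // delete the source only once the session reports .completed
    if export?.status == .completed {
        try? FileManager.default.removeItem(at: inputURL)
    }
}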

Trimming video with Monotouch fails with "The operation could not be completed"

I am trying to trim a video to 5 seconds programmatically. Here is my implementation.
AVAssetExportSession exportSession = new AVAssetExportSession(videoAsset, AVAssetExportSession.PresetLowQuality.ToString());
int SystemVersion = Convert.ToInt16(UIDevice.CurrentDevice.SystemVersion.Split('.')[0]);
string filename;
if (SystemVersion >= 8)
{
    var documents = NSFileManager.DefaultManager.GetUrls(NSSearchPathDirectory.DocumentDirectory, NSSearchPathDomain.User)[0].Path;
    filename = Path.Combine(documents, "trimmed.mov");
}
else
{
    var documents = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments); // iOS 7 and earlier
    filename = Path.Combine(documents, "trimmed.mov");
}
outputUrl = new NSUrl(filename);
exportSession.OutputUrl = outputUrl;

CMTime start = new CMTime((long)1, 1);
CMTime duration = new CMTime((long)5, 1);
CMTimeRange range = new CMTimeRange();
range.Start = start;
range.Duration = duration;
exportSession.TimeRange = range;
exportSession.OutputFileType = AVFileType.QuickTimeMovie;

ExportTrimmedVideo(exportSession);
async void ExportTrimmedVideo(AVAssetExportSession exportSession)
{
    await exportSession.ExportTaskAsync();
    if (exportSession.Status == AVAssetExportSessionStatus.Completed)
    {
        InvokeOnMainThread(() => {
            new UIAlertView("Export Success", "Video is trimmed", null, "O K").Show();
        });
    }
    else
    {
        InvokeOnMainThread(() => {
            new UIAlertView("Export Failure", exportSession.Error.Description, null, "O K").Show();
        });
    }
}
But on completion I am getting a Failed status. The full NSError description is as follows:
Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo=0x7cebcf80 {NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x7cb08410 "The operation couldn’t be completed. (OSStatus error -12105.)", NSLocalizedFailureReason=An unknown error occurred (-12105)}
What am I possibly doing wrong?
Edit
I have referred to Apple's documentation on trimming video and modified the above code as below, with no positive effect.
var compatiblePresets = AVAssetExportSession.ExportPresetsCompatibleWithAsset(videoAsset).ToList();
var preset = "";
if (compatiblePresets.Contains("AVAssetExportPresetLowQuality"))
{
    preset = "AVAssetExportPresetLowQuality";
}
else
{
    preset = compatiblePresets.FirstOrDefault();
}
AVAssetExportSession exportSession = new AVAssetExportSession(videoAsset, preset);
int SystemVersion = Convert.ToInt16(UIDevice.CurrentDevice.SystemVersion.Split('.')[0]);
string filename;
if (SystemVersion >= 8)
{
    var documents = NSFileManager.DefaultManager.GetUrls(NSSearchPathDirectory.DocumentDirectory, NSSearchPathDomain.User)[0].Path;
    filename = Path.Combine(documents, "trimmed.mov");
}
else
{
    var documents = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments); // iOS 7 and earlier
    filename = Path.Combine(documents, "trimmed.mov");
}
outputUrl = new NSUrl(filename);
exportSession.OutputUrl = outputUrl;
exportSession.OutputFileType = AVFileType.QuickTimeMovie;

CMTime start = new CMTime((long)1, 600);
CMTime duration = new CMTime((long)5, 600);
CMTimeRange range = new CMTimeRange();
range.Start = start;
range.Duration = duration;
exportSession.TimeRange = range;

ExportTrimmedVideo(exportSession);
Try the code below. I modified exportSession.OutputUrl and how you initialize your CMTimeRange. Are you trimming it down to a 4-second clip?
var compatiblePresets = AVAssetExportSession.ExportPresetsCompatibleWithAsset(videoAsset).ToList();
var preset = "";
if (compatiblePresets.Contains("AVAssetExportPresetLowQuality"))
{
    preset = "AVAssetExportPresetLowQuality";
}
else
{
    preset = compatiblePresets.FirstOrDefault();
}
using (var exportSession = new AVAssetExportSession(videoAsset, preset))
{
    int SystemVersion = Convert.ToInt16(UIDevice.CurrentDevice.SystemVersion.Split('.')[0]);
    string filename;
    if (SystemVersion >= 8)
    {
        var documents = NSFileManager.DefaultManager.GetUrls(NSSearchPathDirectory.DocumentDirectory, NSSearchPathDomain.User)[0].Path;
        filename = Path.Combine(documents, "trimmed.mov");
    }
    else
    {
        var documents = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments); // iOS 7 and earlier
        filename = Path.Combine(documents, "trimmed.mov");
    }
    exportSession.OutputUrl = NSUrl.FromFilename(filename);
    exportSession.OutputFileType = AVFileType.QuickTimeMovie;

    var range = new CMTimeRange();
    // express the trim in the asset's own timescale so the range is valid
    range.Start = CMTime.FromSeconds(1, videoAsset.Duration.TimeScale);
    range.Duration = CMTime.FromSeconds(5, videoAsset.Duration.TimeScale);
    exportSession.TimeRange = range;

    // export inside the using block, while the session is still alive
    ExportTrimmedVideo(exportSession);
}

AVAssetExportSession - The video could not be composed

I am trying to do some basic video composition in Xamarin / MonoTouch and am having some success, but I am stuck on what seems to be a rather simple task.
I record videos from the camera in portrait, so I use AVAssetExportSession to rotate the videos. I have created a layer instruction to rotate the video, which works fine. I am able to successfully export the video in the correct orientation.
The issue:
When I add the audio track into the export, I always get a failed response with this error:
Domain=AVFoundationErrorDomain Code=-11841 "Operation Stopped" UserInfo=0x1912c320 {NSLocalizedDescription=Operation Stopped, NSLocalizedFailureReason=The video could not be composed.}
If I don't set the videoComposition property on the exportSession, the audio and video export perfectly fine, just with the wrong orientation. If anyone could give me some advice it would be greatly appreciated. Below is my code:
var composition = new AVMutableComposition();
var compositionTrackAudio = composition.AddMutableTrack(AVMediaType.Audio, 0);
var compositionTrackVideo = composition.AddMutableTrack(AVMediaType.Video, 0);
var videoCompositionInstructions = new AVVideoCompositionInstruction[files.Count];
var index = 0;
var renderSize = new SizeF(480, 480);
var _startTime = CMTime.Zero;

//AVUrlAsset asset;
var asset = new AVUrlAsset(new NSUrl(file, false), new AVUrlAssetOptions());
//var asset = AVAsset.FromUrl(new NSUrl(file, false));

//create an AVAssetTrack with our asset
var videoTrack = asset.TracksWithMediaType(AVMediaType.Video)[0];
var audioTrack = asset.TracksWithMediaType(AVMediaType.Audio)[0];

//create a video composition and preset some settings
NSError error;
var assetTimeRange = new CMTimeRange { Start = CMTime.Zero, Duration = asset.Duration };

compositionTrackAudio.InsertTimeRange(new CMTimeRange
{
    Start = CMTime.Zero,
    Duration = asset.Duration,
}, audioTrack, _startTime, out error);
if (error != null)
{
    Debug.WriteLine(error.Description);
}

compositionTrackVideo.InsertTimeRange(assetTimeRange, videoTrack, _startTime, out error);

//create a video instruction
var transformer = new AVMutableVideoCompositionLayerInstruction
{
    TrackID = videoTrack.TrackID,
};

var audioMix = new AVMutableAudioMix();
var mixParameters = new AVMutableAudioMixInputParameters
{
    TrackID = audioTrack.TrackID
};
mixParameters.SetVolumeRamp(1.0f, 1.0f, new CMTimeRange
{
    Start = CMTime.Zero,
    Duration = asset.Duration
});
audioMix.InputParameters = new[] { mixParameters };

var t1 = CGAffineTransform.MakeTranslation(videoTrack.NaturalSize.Height, 0);
//Make sure the square is portrait
var t2 = CGAffineTransform.Rotate(t1, (float)(Math.PI / 2f));
var finalTransform = t2;
transformer.SetTransform(finalTransform, CMTime.Zero);

//add the transformer layer instructions, then add to video composition
var instruction = new AVMutableVideoCompositionInstruction
{
    TimeRange = assetTimeRange,
    LayerInstructions = new[] { transformer }
};
videoCompositionInstructions[index] = instruction;
index++;
_startTime = CMTime.Add(_startTime, asset.Duration);

var videoComposition = new AVMutableVideoComposition();
videoComposition.FrameDuration = new CMTime(1, (int)videoTrack.NominalFrameRate);
videoComposition.RenderScale = 1;
videoComposition.Instructions = videoCompositionInstructions;
videoComposition.RenderSize = renderSize;

var exportSession = new AVAssetExportSession(composition, AVAssetExportSession.PresetHighestQuality);
var filePath = _fileSystemManager.TempDirectory + DateTime.UtcNow.Ticks + ".mp4";
var outputLocation = new NSUrl(filePath, false);

exportSession.OutputUrl = outputLocation;
exportSession.OutputFileType = AVFileType.Mpeg4;
exportSession.VideoComposition = videoComposition;
exportSession.AudioMix = audioMix;
exportSession.ShouldOptimizeForNetworkUse = true;

exportSession.ExportAsynchronously(() =>
{
    Debug.WriteLine(exportSession.Status);
    switch (exportSession.Status)
    {
        case AVAssetExportSessionStatus.Failed:
        {
            Debug.WriteLine(exportSession.Error.Description);
            Debug.WriteLine(exportSession.Error.DebugDescription);
            break;
        }
        case AVAssetExportSessionStatus.Completed:
        {
            if (File.Exists(filePath))
            {
                _uploadService.AddVideoToVideoByteList(File.ReadAllBytes(filePath), ".mp4");
                Task.Run(async () =>
                {
                    await _uploadService.UploadVideo(_videoData);
                });
            }
            break;
        }
        case AVAssetExportSessionStatus.Unknown:
        case AVAssetExportSessionStatus.Exporting:
        case AVAssetExportSessionStatus.Cancelled:
        {
            break;
        }
    }
});
So this was a really stupid mistake: it was due to adding the audio track in before the video, so the instructions must have been trying to apply the transform to the audio track rather than my video track.
My problem was that I forgot to set the timeRange; it should be like this:
let instruction = AVMutableVideoCompositionInstruction()
instruction.layerInstructions = [layer]
instruction.timeRange = CMTimeRange(start: kCMTimeZero, duration: videoDuration)
Note that AVMutableVideoCompositionInstruction.timeRange's end time must be valid. It is different from AVAssetExportSession.timeRange:
The time range to be exported from the source.
The default time range of an export session is kCMTimeZero to kCMTimePositiveInfinity, meaning that (modulo a possible limit on file length) the full duration of the asset will be exported.
You can observe this property using key-value observing.
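Put differently, both properties can be set, but only the instruction's range must end at a concrete time. A short sketch; layer, videoDuration, and exportSession stand in for the values in the surrounding example:

let instruction = AVMutableVideoCompositionInstruction()
instruction.layerInstructions = [layer]
instruction.timeRange = CMTimeRange(start: kCMTimeZero, duration: videoDuration) // must be finite and valid

// the session's range may stay at its default (kCMTimeZero ... kCMTimePositiveInfinity),
// or be set explicitly:
exportSession.timeRange = CMTimeRange(start: kCMTimeZero, duration: videoDuration)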
