Cannot use 'TVICameraCapturer' - iOS

I'm migrating my app to iOS 13, Swift 5.0.
The main feature of my app is video calling, which was working great.
I was using TVICameraCapturer to add the video track,
using the methods below:
var camera: TVICameraCapturer!

guard let camera = TVICameraCapturer(source: .backCameraWide),
      let videoTrack = LocalVideoTrack(capturer: camera) else {
    return
}
Anyway, after migrating and fixing all the errors, one thing remains: I can't find 'TVICameraCapturer'. Can anyone tell me whether it has been deprecated and suggest an alternative?

It has been changed in recent versions of TwilioVideo; CameraSource is the type you are looking for.
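A minimal sketch of what the migration could look like with TwilioVideo 3.x+, assuming the back wide-angle camera is still wanted; the property and function names here are illustrative, not from the original post:

import TwilioVideo

var camera: CameraSource?
var localVideoTrack: LocalVideoTrack?

func startBackCameraTrack() {
    // CameraSource replaces TVICameraCapturer; LocalVideoTrack is now built from a source.
    guard let source = CameraSource(delegate: nil),
          let track = LocalVideoTrack(source: source) else {
        return
    }
    camera = source
    localVideoTrack = track

    // captureDevice(position:) returns the wide-angle camera for the given position.
    if let backCamera = CameraSource.captureDevice(position: .back) {
        source.startCapture(device: backCamera) { _, _, error in
            if let error = error {
                print("Failed to start capture: \(error)")
            }
        }
    }
}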

Related

iOS: Apply audio modifications to Music library content

I'm working on an iOS/Flutter application, and I'm trying to work out whether it's possible to play audio from the Music library on iOS with audio modifications (e.g. equalization settings) applied.
It seems like I'm looking for a solution that works with MPMusicPlayerController, since that appears to be the strategy for playing local audio from the user's iOS Music library. I can find examples of applying EQ to audio on iOS (e.g. using AVAudioUnitEQ and AVAudioEngine: SO link, tutorial), but I'm unable to find anything that helps me understand whether it's possible to bridge the gap between the two.
Flutter specific context:
There are Flutter plugins that provide some of the functionality I'm looking for, but they don't appear to work together. For example, the just_audio plugin has a robust set of features for modifying audio, but it does not work with the local Music application on iOS/MPMusicPlayerController. Other plugins that do work with MPMusicPlayerController, like playify, do not have the ability to modify/transform the audio.
Even though I'm working with Flutter, any general advice on the iOS side would be very helpful. I appreciate any insight someone with more knowledge may be able to share with me!
Updating with my own answer here for future people: it looks like my only path forward (for now) is leaning into AVAudioEngine directly. This is the rough POC that worked for me:
import AVFoundation
import MediaPlayer

var audioPlayer = AVAudioPlayerNode()
var audioEngine = AVAudioEngine()
var eq = AVAudioUnitEQ()

// Grab the first song from the user's Music library.
let mediaItemCollection: [MPMediaItem] = MPMediaQuery.songs().items!
let song = mediaItemCollection[0]

do {
    // assetURL bridges the MPMediaItem into something AVAudioFile can read.
    let file = try AVAudioFile(forReading: song.assetURL!)

    // Build the graph: player -> EQ -> output.
    audioEngine.attach(audioPlayer)
    audioEngine.attach(eq)
    audioEngine.connect(audioPlayer, to: eq, format: nil)
    audioEngine.connect(eq, to: audioEngine.outputNode, format: file.processingFormat)

    audioPlayer.scheduleFile(file, at: nil)
    try audioEngine.start()
    audioPlayer.play()
} catch {
    print("Playback setup failed: \(error)")
}
The trickiest part for me was working out how to bridge the "Music library/MPMediaItem" world to the "AVAudioEngine" world, which turned out to be just AVAudioFile(forReading: song.assetURL!).
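To actually hear an equalization change with this setup, the EQ node needs bands to shape. A small sketch, not from the original answer, that creates the eq above with two illustrative bands configured before the engine starts:

import AVFoundation

// Two bands: a low-shelf bass boost and a parametric cut in the upper mids.
let eq = AVAudioUnitEQ(numberOfBands: 2)

let bass = eq.bands[0]
bass.filterType = .lowShelf
bass.frequency = 100      // Hz
bass.gain = 6             // dB
bass.bypass = false

let mids = eq.bands[1]
mids.filterType = .parametric
mids.frequency = 3000     // Hz
mids.bandwidth = 1.0      // octaves
mids.gain = -4            // dB
mids.bypass = false

// Overall make-up gain applied after the bands, if needed.
eq.globalGain = 0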

Inaccurate face detection using ML Kit Face detection, doesn't work with selfies

I am creating an iOS app that uses Firebase ML Kit face detection, and I am trying to allow users to take a photo with their camera and check whether there is a face in it. I followed the documentation and some YouTube videos, but it just doesn't seem to work properly/accurately for me. I did some testing using my photo library, not just pictures that I take, and what I found is that it works well when I use selfies from Google, but when I take my own selfies it never seems to work. I noticed that when I take a selfie with my camera it does a "mirror" kind of thing where it flips the image, but I even took a picture of my friend using the front-facing camera and it still didn't work. So I am not sure if I implemented this wrong or what is going on. I have attached some of the relevant code to show how it was implemented. Thanks to anyone who takes the time to help out; I am a novice at iOS development, so hopefully this isn't a waste of your time.
func photoVerification() {
    let options = VisionFaceDetectorOptions()
    let vision = Vision.vision()
    let faceDetector = vision.faceDetector(options: options)
    let image = VisionImage(image: image_one.image!)
    faceDetector.process(image) { (faces, error) in
        guard error == nil, let faces = faces, !faces.isEmpty else {
            // No face detected, flag the image with an error
            print("No face detected!")
            self.markImage(isVerified: false)
            return
        }
        // Face has been detected, offer the Verified tag to the user
        print("Face detected!")
        self.markImage(isVerified: true)
    }
}
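One thing that may be worth ruling out here (an assumption on my side, not something confirmed in the thread) is the EXIF-style orientation that camera and selfie photos carry: redrawing the UIImage forces its pixels into the .up orientation before it is handed to VisionImage. A minimal sketch:

import UIKit

// Hypothetical helper: returns a copy of the image rendered with .up orientation.
func normalizedImage(_ image: UIImage) -> UIImage {
    guard image.imageOrientation != .up else { return image }
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: image.size))
    }
}

// Usage in photoVerification(), assuming image_one.image holds the photo:
// let image = VisionImage(image: normalizedImage(image_one.image!))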

AudioKit export song pre iOS 11

Note that this is NOT a duplicate of this SO post, because that post only says WHAT method to use; there's no example of HOW I should use it.
So, I have dug into AKOfflineRenderNode as much as I can and have viewed all the examples I could find. However, my code never seemed to work correctly on iOS 10.3.1 devices (and other iOS 10 versions); the result is always silent. I tried to follow the examples provided in other SO posts, but with no success. I tried to follow the one in SongProcessor, but it uses an older version of Swift and I can't even compile it. Trying SongProcessor's way of using AKOfflineRenderNode didn't help either; it always turned out silent.
I created a demo project just to test this. Because I don't own the audio file I used for testing, I couldn't upload it to my GitHub. Please add an audio file named "Test" to the project before compiling onto an iOS 10.3.1 simulator. (And if your file isn't an m4a, remember to change the file type in the code where I initialize AKPlayer.)
If you don't want to download and run the sample, the essential part is here:
@IBAction func export() {
    // url, player, offlineRenderer and others are predefined and connected as
    // player >> aPitchShifter >> offlineRenderer
    // AudioKit.output is already offlineRenderer
    offlineRenderer.internalRenderEnabled = false
    try! AudioKit.start()

    // I also tried using AKAudioPlayer instead of AKPlayer
    // Also tried getting time in these ways:
    // AVAudioTime.secondsToAudioTime(hostTime: 0, time: 0)
    // player.audioTime(at: 0)
    // And for hostTime I've tried 0 as well as mach_absolute_time()
    // None worked
    let time = AVAudioTime(sampleTime: 0, atRate: offlineRenderer.avAudioNode.inputFormat(forBus: 0).sampleRate)

    player.play(at: time)
    try! offlineRenderer.renderToURL(url, duration: player.duration)
    player.stop()
    player.disconnectOutput()

    offlineRenderer.internalRenderEnabled = true
    try? AudioKit.stop()
}

ML Kit face recognition not working on iOS

I'm working on an app that does facial recognition. One of the steps involves detecting the user's smile. For that, I am currently using Google's ML Kit. The application works fine on the Android platform, but when I run it on iOS (iPhone XR and others) it does not recognize any faces in any image. I have already followed every step on how to integrate iOS and Firebase, and it runs fine.
Here's my code. It always falls into the length == 0 case, as if the image did not contain any faces. The image passed as a parameter comes from the image_picker plugin.
Future<Face> verifyFace(File thisImage) async {
  var beforeTime = new DateTime.now();
  final image = FirebaseVisionImage.fromFile(thisImage);
  final faceDetector = FirebaseVision.instance.faceDetector(
    FaceDetectorOptions(
      mode: FaceDetectorMode.accurate,
      enableClassification: true,
    ),
  );
  var processedImages = await faceDetector.processImage(image);
  print('Processing time: ' +
      DateTime.now().difference(beforeTime).inMilliseconds.toString());
  if (processedImages.length == 0) {
    throw new NoFacesDetectedException();
  } else if (processedImages.length == 1) {
    Face face = processedImages.first;
    if (face.smilingProbability == null) {
      throw new LipsNotFoundException();
    } else {
      return face;
    }
  } else if (processedImages.length > 1) {
    throw new TooManyFacesDetectedException();
  }
}
If someone has any tips or can tell me what I am doing wrong, I would be very grateful.
I know this is an old issue, but I was having the same problem, and it turns out I had just forgotten to add the pod 'Firebase/MLVisionFaceModel' to the Podfile.
There is configuration in so many places that I'd rather leave you this video (although maybe you have already seen it) so you can see some code and how Matt Sullivan builds the same thing you are trying to do.
Let me know if you have already seen it, and please add an example repo I could work with so I can see your exact code.
From what I can tell, ML Kit face detection does work on iOS but very poorly.
It doesn't even seem worth it to use the SDK.
The docs do say that the face itself must be at least 100x100px. In my testing, though, the face itself needs to be at least 700px for the SDK to detect it.
The SDK on Android works super well even on small image sizes (200x200px in total).

How to customize the WebRTC video source?

Does someone know how to change the WebRTC (https://cocoapods.org/pods/libjingle_peerconnection) video source?
I am working on a screen-sharing app.
At the moment, I retrieve the rendered frames in real time as CVPixelBuffers. Does someone know how I could add my frames as a video source?
Is it possible to set another video source instead of the camera device source? If yes, what format does the video have to be in, and how do I do it?
Thanks.
let factory: RTCPeerConnectionFactory = RTCPeerConnectionFactory()
let videoSource: RTCVideoSource = factory.videoSource()
videoSource.capturer(videoCapturer, didCapture: videoFrame!)
Mounis' answer is wrong. It leads to nothing, at least not at the time of this writing; there is simply nothing happening.
In fact, you would need to satisfy this delegate
- (void)capturer:(RTCVideoCapturer *)capturer didCaptureVideoFrame:(RTCVideoFrame *)frame;
(Note the difference to the Swift version: didCapture vs. didCaptureVideoFrame)
Since this delegate is, for unclear reasons, not available at the Swift level (the compiler says you have to use didCapture, since it has been renamed from didCaptureVideoFrame with Swift 3), you have to put the code into an ObjC class. I copied this and this (which is part of this sample project) into my project, made my videoCapturer an instance of ARDExternalSampleCapturer
self.videoCapturer = ARDExternalSampleCapturer(delegate: videoSource)
and within the capture callback I'm calling it
let capturer = self.videoCapturer as? ARDExternalSampleCapturer
capturer?.didCapture(sampleBuffer)
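Pulling the pieces above together, a rough sketch of how this could look inside a ReplayKit broadcast upload extension; ARDExternalSampleCapturer is the ObjC class copied from the sample project mentioned above, the WebRTC module name and the track id are assumptions on my part:

import CoreMedia
import ReplayKit
import WebRTC

class SampleHandler: RPBroadcastSampleHandler {
    private let factory = RTCPeerConnectionFactory()
    private var capturer: ARDExternalSampleCapturer?
    private var videoTrack: RTCVideoTrack?

    override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
        let videoSource = factory.videoSource()
        capturer = ARDExternalSampleCapturer(delegate: videoSource)
        videoTrack = factory.videoTrack(with: videoSource, trackId: "screen0")
        // ...add videoTrack to your RTCPeerConnection / signaling here
    }

    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                      with sampleBufferType: RPSampleBufferType) {
        guard sampleBufferType == .video else { return }
        // Feed each screen frame into WebRTC through the external capturer.
        capturer?.didCapture(sampleBuffer)
    }
}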