Is it possible to access system classes in Swift? - ios

I was trying to access the player inside WKWebView. After some digging it turned out that it doesn't use AVPlayerViewController; it uses a system class called WebFullScreenVideoRootViewController.
I used the code shown in the picture; the function there is fired after a UIWindow appears.
After that I dug further and searched for notifications fired by WebFullScreenVideoRootViewController and some class called AVSystemController (or something like that). It turned out it has multiple notifications, two of which logically do what I want:
NowPlayingAppIsPlayingDidChange // first one
SomeClientPlayingDidChange // second one
But the object that they return is called FigBaseObject.
Is there any way to access these objects, even in "some hacky way :P"?

This link should help you figure out which notification to act on:
http://paulofierro.com/blog/2015/10/12/listening-for-video-playback-within-a-wkwebview
Inside the notification you can find the URL.
Extract the video URL from the notification's asset:
// notification.object is the current AVPlayerItem (private behavior).
if let item = notification.object as? AVPlayerItem,
   let asset = item.asset as? AVURLAsset {
    let videoURL = asset.url // asset.URL in Swift 2
}
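For completeness, here is a minimal sketch of registering for the private notification that post describes. The name AVPlayerItemBecameCurrentNotification is undocumented, so treat it as an assumption that may break in future iOS versions:

import AVFoundation

// Private, undocumented notification posted when a web view's
// player item becomes current. Not guaranteed by Apple.
let becameCurrent = NSNotification.Name("AVPlayerItemBecameCurrentNotification")

NotificationCenter.default.addObserver(forName: becameCurrent,
                                       object: nil,
                                       queue: .main) { notification in
    if let item = notification.object as? AVPlayerItem,
       let asset = item.asset as? AVURLAsset {
        print("Video URL: \(asset.url)")
    }
}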
I tested this in Xamarin.iOS, where notification.object?.asset is not available, but I'm not sure about Swift.
Thanks

Related

How can I save the last paused time in an AVPlayer, and seek to it in another storyboard?

I'm new to Xcode/Swift, so please forgive my inexperience. I have a ViewController where my viewers can listen to some audio playback. The playback is accessed like this when the user taps a play button:
@IBAction func buttonClicked(_ sender: RoundButton) {
    self.clickedButton = sender
    guard let url = sender.url else {
        return
    }
    let player = AVPlayer(url: url)
    let controller = AVPlayerViewController()
    controller.player = player
    present(controller, animated: true) {
        player.play()
    }
}
I got this code from another Stack Overflow question, so I don't completely understand it. My goal is to be able to save the URL and the last played time so that the user can minimize the app, or navigate to a different screen, and then be able to click a "continue listening" button which pulls up another AVPlayer with the last used URL. This "continue listening" AVPlayer will then seek to the last played time.
I know that I need to observe the first AVPlayer somehow, so that when it is paused, stopped, or put in the background, I save the currentTime to UserDefaults (I think?). I also need to save the URL, because there are many different URLs the user could click on.
I tried doing this, and besides not being able to figure out the observation, I also couldn't figure out the type inconsistencies with UserDefaults. I tried to retrieve the stored URL value as a String after setting it, but when I went to cast the String to a URL using URL(string: lastPlayedURL), Xcode complained: "Cannot convert type Data? to expected type String".
My issue with using other Stack Overflow questions to solve my problem is that I don't understand where to put the code blocks. Where do I create the observer? Xcode did not seem happy when I created it inside the body of buttonClicked.
Thank you for listening to my rambling.
Yes, UserDefaults is an appropriate place to store the URL; save it as a String with set(_:forKey:) and read it back with string(forKey:).
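A minimal sketch of saving and restoring the playback state (the key names and helper functions are illustrative, not from the question):

import AVFoundation

// Store the URL as a String to avoid the Data?/String mismatch
// mentioned in the question.
func savePlaybackState(of player: AVPlayer, url: URL) {
    let defaults = UserDefaults.standard
    defaults.set(url.absoluteString, forKey: "lastPlayedURL")
    defaults.set(player.currentTime().seconds, forKey: "lastPlayedTime")
}

func restorePlaybackState() -> (url: URL, time: CMTime)? {
    let defaults = UserDefaults.standard
    guard let urlString = defaults.string(forKey: "lastPlayedURL"),
          let url = URL(string: urlString) else { return nil }
    let seconds = defaults.double(forKey: "lastPlayedTime")
    return (url, CMTime(seconds: seconds, preferredTimescale: 600))
}

On the "continue listening" screen you would create an AVPlayer with the restored URL, call player.seek(to: time), and then player.play().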
To observe the player, you can use key-value observation on the player itself or work with its AVPlayerItem.
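Here is a sketch of one way to do it, assuming iOS 10+ for timeControlStatus and reusing the hypothetical savePlaybackState helper from above:

import AVFoundation
import UIKit

var statusObservation: NSKeyValueObservation?

// Save the position whenever the player pauses, and again when the
// app is sent to the background.
func startObserving(_ player: AVPlayer, url: URL) {
    statusObservation = player.observe(\.timeControlStatus) { player, _ in
        if player.timeControlStatus == .paused {
            savePlaybackState(of: player, url: url)
        }
    }
    NotificationCenter.default.addObserver(
        forName: UIApplication.didEnterBackgroundNotification,
        object: nil,
        queue: .main
    ) { _ in
        savePlaybackState(of: player, url: url)
    }
}

Create the observer once, right after you create the player in buttonClicked (it cannot exist before the player does), and keep the observation alive for as long as the player is in use.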
About your general issues with Xcode (note the capitalisation) and the Swift language, I'm afraid these are things you need to work through yourself by reading and watching tutorials and documentation. Then, when you hit specific issues, post your code here and ask.
Good luck and have fun!

How to custom WebRTC video source?

Does someone know how to change the WebRTC (https://cocoapods.org/pods/libjingle_peerconnection) video source?
I am working on a screen-sharing app.
At the moment, I retrieve the rendered frames in real time as CVPixelBuffers. Does someone know how I could add my frames as the video source?
Is it possible to set another video source instead of the camera device? If yes, what format does the video have to be in, and how do I do it?
Thanks.
var connectionFactory: RTCPeerConnectionFactory = RTCPeerConnectionFactory()
let videoSource: RTCVideoSource = connectionFactory.videoSource()
videoSource.capturer(videoCapturer, didCapture: videoFrame!)
Mounis's answer is wrong; it leads to nothing, at least at the time of this writing. Simply nothing happens.
In fact, you would need to satisfy this delegate method:
- (void)capturer:(RTCVideoCapturer *)capturer didCaptureVideoFrame:(RTCVideoFrame *)frame;
(Note the difference from the Swift version: didCapture vs. didCaptureVideoFrame.)
Since this delegate is, for unclear reasons, not available at the Swift level (the compiler says you have to use didCapture, since it was renamed from didCaptureVideoFrame in Swift 3), you have to put the code into an Obj-C class. I copied this and this (which is part of this sample project) into my project, and made my videoCapturer an instance of ARDExternalSampleCapturer:
self.videoCapturer = ARDExternalSampleCapturer(delegate: videoSource)
and within the capture callback I call it:
let capturer = self.videoCapturer as? ARDExternalSampleCapturer
capturer?.didCapture(sampleBuffer)
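For reference, with the newer GoogleWebRTC pod the Swift-level delegate call does work, so feeding your own CVPixelBuffers can look like the following sketch (treat the exact initializers as assumptions if you are still on the older libjingle_peerconnection pod):

import WebRTC
import QuartzCore

// Wrap a CVPixelBuffer in an RTCVideoFrame and hand it to the video
// source, which acts as the capturer's delegate.
func push(_ pixelBuffer: CVPixelBuffer,
          to videoSource: RTCVideoSource,
          from capturer: RTCVideoCapturer) {
    let buffer = RTCCVPixelBuffer(pixelBuffer: pixelBuffer)
    let timeStampNs = Int64(CACurrentMediaTime() * Double(NSEC_PER_SEC))
    let frame = RTCVideoFrame(buffer: buffer,
                              rotation: ._0,
                              timeStampNs: timeStampNs)
    videoSource.capturer(capturer, didCapture: frame)
}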

Amazon polly not playing multiple text inputs

I have integrated Amazon Polly into one of my projects in Swift and am asking it to speak several text strings. I am using the following instructions to play the sound:
builder.continueOnSuccessWith { (awsTask: AWSTask<NSURL>) -> Any? in
    // The result of the getPresignedURL task is an NSURL.
    // Again, we ignore the errors in the example.
    let url = awsTask.result!

    // Try playing the data using the system AVPlayer.
    self.audioPlayer.replaceCurrentItem(with: AVPlayerItem(url: url as URL))
    self.audioPlayer.play()
    return nil
}
While debugging I found that replaceCurrentItem(with:) swaps in the new item to play and discards the previous one. I would like some suggestions on how Polly handles such multiple calls within its framework.
Any help appreciated! Thanks
I was able to make this work by inserting each item into an AVQueuePlayer and playing at the end, but I am still keen to know how Amazon handles playing multiple files in Polly.
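For reference, a minimal sketch of the AVQueuePlayer approach (the urls array is illustrative and stands for the presigned URLs returned by Polly):

import AVFoundation

let queuePlayer = AVQueuePlayer()

// Queue every synthesized utterance, then start playback once.
// insert(_:after:) with nil appends the item to the end of the queue.
func playAll(_ urls: [URL]) {
    for url in urls {
        queuePlayer.insert(AVPlayerItem(url: url), after: nil)
    }
    queuePlayer.play()
}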

Spotify SDK in Swift 3.0: how to know when a song ends

I'm building a personal app in Swift using the Spotify API, and one thing it needs to do is play another song once the song that was playing is over. I have the IDs of the songs ordered the way I want them, and I can play them successfully.
I just need to know if there's a way to tell when a song is at its end, so I know when to start playing the next one or do other things (I make an HTTP request every time a song ends, for another reason).
Any suggestions or resources you could point me to are incredibly helpful! I couldn't find an answer through Google searches. :(
Spotify API: Is there a way to determine when a song has finished playing?
This is a similar question that was posted, but it was for JavaScript and was never answered.
You can implement the SPTAudioStreamingPlaybackDelegate protocol, set the Spotify player's playbackDelegate property and implement didStopPlayingTrack. An example class would look something like this:
class MyClass: NSObject, SPTAudioStreamingPlaybackDelegate {
    var player = SPTAudioStreamingController.sharedInstance()

    func setup() { // Whatever function does the setup.
        player?.playbackDelegate = self
    }

    func audioStreaming(_ audioStreaming: SPTAudioStreamingController!,
                        didStopPlayingTrack trackUri: String!) {
        playNextSong() // Or whatever else you wish to do here.
    }

    func playNextSong() {
        // ...
    }
}
(Note that the setup function could be anything, such as viewDidLoad, if the object observing the player is a view controller rather than an NSObject subclass.)
Because didStopPlayingTrack is a delegate function, the Spotify SDK manages when it gets called; more specifically, the player calls it on its delegate object. You don't need to call it yourself as long as the player's delegate is set.

Using reopened standard file descriptors in an iOS app with background capabilities?

I would like to be able to redirect my logging statements to a file so that I can retrieve them when my app runs standalone (i.e. is not attached to Xcode). I have discovered (thank you, Stack Overflow) that freopen can be used to accomplish this.
If I create a new Xcode project and add the code to redirect stderr, then everything works as expected.
However, when I add the redirection code to my existing Bluetooth project, I run into trouble. The file is created and I can retrieve it using iTunes or Xcode's Devices window, but it has size 0. If I explicitly close the file, the text I wrote actually makes it into the file. It is as though iOS is not flushing the file when the app is terminated. I suspect the trouble stems from the fact that I have enabled background processing. Can anyone help me understand this?
Here is my code:
let pathes = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)
let filePath = NSURL(fileURLWithPath: pathes[0]).URLByAppendingPathComponent("Test.log")
freopen(filePath.path!, "a", stderr)

fputs("Hello, Samantha!\r\n", stderr)

struct StderrOutputStream: OutputStreamType {
    static let stream = StderrOutputStream()
    func write(string: String) { fputs(string, stderr) }
}

var errStream = StderrOutputStream.stream
print("Hello, Robert", toStream: &errStream)

fclose(stderr) // Without this the text does not make it into the file.
I'd leave this as a comment, but have you looked into NSFileHandle? It sounds like you just need a way to append data to the end of a text file, correct?
Once you have a handle from something like NSFileHandle(forWritingToURL:), you can use seekToEndOfFile() and writeData(_:). As a side note, you'll need to convert your String to Data before writing it.
Admittedly, this will probably end up being more lines of code, and you'll almost certainly need to take threading into consideration.
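A minimal sketch of that approach, written in current Swift syntax (the file name matches the question; the helper itself is illustrative):

import Foundation

// Append a line to Documents/Test.log using FileHandle.
func appendToLog(_ message: String) {
    let docs = FileManager.default.urls(for: .documentDirectory,
                                        in: .userDomainMask)[0]
    let logURL = docs.appendingPathComponent("Test.log")

    // Create the file on first use.
    if !FileManager.default.fileExists(atPath: logURL.path) {
        FileManager.default.createFile(atPath: logURL.path, contents: nil)
    }

    guard let handle = try? FileHandle(forWritingTo: logURL),
          let data = (message + "\n").data(using: .utf8) else { return }
    handle.seekToEndOfFile()  // move to the end so we append
    handle.write(data)
    handle.closeFile()        // closing flushes, so the data survives termination
}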
