Is there any way to get the PCM frames of a song that is playing in Deezer or Spotify, and if there is, could you briefly explain how?
I checked both APIs for a way to do that, but I'm not having much luck tonight and I haven't found an answer yet... :(
Any kind of help would be very useful, thanks a lot.
Kind Regards,
Sébastien.
Disclaimer: I work for Spotify
libspotify delivers raw PCM frames in the music_delivery callback; see the API documentation for more details. In fact, this is the default delivery mechanism for libspotify, so you don't need to do anything special to get raw PCM: that's the format the library speaks.
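In case it helps, here's a minimal sketch of that callback. libspotify is a plain C library, so this assumes its header is exposed to Swift through a bridging header; the hand-off to your own audio queue is left as a comment:

```swift
import Foundation

// Sketch only: assumes libspotify's C header is bridged into Swift.
// music_delivery fires on an internal libspotify thread and hands you
// interleaved 16-bit signed PCM frames.
var callbacks = sp_session_callbacks()
callbacks.music_delivery = { _, format, frames, numFrames in
    guard let format = format, let frames = frames, numFrames > 0 else {
        return 0 // consumed nothing; libspotify will call again
    }
    let channels  = Int(format.pointee.channels)
    let byteCount = Int(numFrames) * channels * MemoryLayout<Int16>.size
    let pcm = Data(bytes: frames, count: byteCount)
    // Hand `pcm` off to your own audio queue / ring buffer here.
    return numFrames // tell libspotify how many frames you consumed
}
// Pass `callbacks` in the sp_session_config when creating the session.
```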
I'm not sure about the Spotify Web Apps platform; I'm not a JavaScript guy at all...
Related
I am looking into developing an application that transcribes an audio file for me and then gives me a document with the words or phrases and the times they were spoken, just like YouTube does. I could simply upload the files to YouTube and then get the transcript, but I want to use it offline. Can anyone help? Where can I start?
Not sure about YouTube, but I would start with the Google Cloud Speech API, and if you're not happy with it, then I'd go through these 5 as well.
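For the words-with-timestamps part specifically, the Cloud Speech REST endpoint can return per-word time offsets. A rough Swift sketch (YOUR_API_KEY is a placeholder, and the synchronous recognize endpoint only handles short clips):

```swift
import Foundation

// Sketch: send a short LINEAR16 (16-bit PCM) clip to Cloud Speech-to-Text
// and ask for per-word timestamps, similar to a YouTube transcript.
func transcribe(audioFileURL: URL) throws {
    let audio = try Data(contentsOf: audioFileURL)
    let config: [String: Any] = [
        "encoding": "LINEAR16",
        "sampleRateHertz": 16_000,
        "languageCode": "en-US",
        "enableWordTimeOffsets": true // per-word start/end times
    ]
    let body: [String: Any] = [
        "config": config,
        "audio": ["content": audio.base64EncodedString()]
    ]
    var request = URLRequest(url: URL(string:
        "https://speech.googleapis.com/v1/speech:recognize?key=YOUR_API_KEY")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONSerialization.data(withJSONObject: body)

    URLSession.shared.dataTask(with: request) { data, _, _ in
        guard let data = data else { return }
        // The response's results[].alternatives[].words[] entries carry
        // word, startTime, and endTime fields.
        print(String(data: data, encoding: .utf8) ?? "")
    }.resume()
}
```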
Also, bear in mind that Chrome has the Web Speech API built in (most likely Firefox has something similar, but I've never had a need to explore that), so if what you're doing is for the web, you should check that out too.
Let us know if this helped.
I'm trying to find a way to get the average power level for a channel from the audio played in an embedded video. I'm using YouTube's iOS helper library for embedding the video: https://developers.google.com/youtube/v3/guides/ios_youtube_helper
A lot of the answers I've found on Stack Overflow refer to AVAudioPlayer, but that's not my case. I also looked in the docs of the AudioKit framework for something that can give the output level of the current audio, but I couldn't find anything related; maybe I missed something there. I also looked in the EZAudio framework, even though it's deprecated, and I couldn't find anything that relates to my case either.
My line of thinking was to find a way to get the actual level coming out of the device, but I found one answer on SO saying this is not allowed on iOS, although it didn't cite any source for that statement.
https://stackoverflow.com/a/12664340/4711172
So, any help would be much appreciated.
The iOS security sandbox blocks apps from seeing the device's digital audio output stream, or any other app's internal audio output (unless explicitly shared, e.g. via inter-app audio), at least when using public APIs permitted in the Apple App Store.
(Just a guess, but this was probably originally implemented in iOS to prevent apps from capturing samples of DRM'd music and/or recording phone conversations.)
This might be a bit off/weird, but just in case:
Have you considered closing the loop? Meaning: record the incoming audio using AVAudioRecorder and get the audio levels from there.
See Apple's documentation for AVAudioRecorder (the overview specifically says: "Obtain input audio-level data that you can use to provide level metering"):
AVAudioRecorder documentation
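A minimal Swift sketch of that idea (the recorder settings and the polling interval here are just placeholder choices):

```swift
import AVFoundation

// Sketch: record the incoming audio and poll AVAudioRecorder's level meters.
// Assumes microphone permission (NSMicrophoneUsageDescription) is granted.
func startMetering() throws -> AVAudioRecorder {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, options: .defaultToSpeaker)
    try session.setActive(true)

    let url = FileManager.default.temporaryDirectory
        .appendingPathComponent("metering.m4a")
    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 1
    ]
    let recorder = try AVAudioRecorder(url: url, settings: settings)
    recorder.isMeteringEnabled = true
    recorder.record()

    // Poll the meters periodically (the interval is arbitrary).
    Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in
        recorder.updateMeters()
        let average = recorder.averagePower(forChannel: 0) // dBFS, -160...0
        let peak    = recorder.peakPower(forChannel: 0)
        print("average: \(average) dB, peak: \(peak) dB")
    }
    return recorder
}
```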
We are trying to capture the PCM data from an HLS stream for processing, ideally just before it is played, though just after is acceptable. We want to do all of this while still using AVPlayer.
Has anyone done this? For non-HLS streams, as well as local files, this seems to be possible with MTAudioProcessingTap, but not with HLS. This question discusses doing it with non-HLS:
AVFoundation audio processing using AVPlayer's MTAudioProcessingTap with remote URLs
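For reference, the non-HLS tap setup from that question looks roughly like this in Swift (a sketch only; error handling and the actual PCM processing are elided):

```swift
import AVFoundation
import MediaToolbox

// Sketch of the non-HLS approach: attach an MTAudioProcessingTap to the
// player item's audio mix so the process callback sees raw PCM.
func attachTap(to item: AVPlayerItem, asset: AVAsset) {
    var callbacks = MTAudioProcessingTapCallbacks(
        version: kMTAudioProcessingTapCallbacksVersion_0,
        clientInfo: nil,
        init: nil,
        finalize: nil,
        prepare: nil,
        unprepare: nil,
        process: { tap, numberFrames, _, bufferListInOut, numberFramesOut, flagsOut in
            // Pull the source audio; bufferListInOut then holds raw PCM.
            let status = MTAudioProcessingTapGetSourceAudio(
                tap, numberFrames, bufferListInOut, flagsOut, nil, numberFramesOut)
            guard status == noErr else { return }
            // Process the PCM frames in bufferListInOut here.
        })

    var tap: Unmanaged<MTAudioProcessingTap>?
    guard MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                     kMTAudioProcessingTapCreationFlag_PostEffects,
                                     &tap) == noErr,
          let audioTrack = asset.tracks(withMediaType: .audio).first else { return }

    let inputParams = AVMutableAudioMixInputParameters(track: audioTrack)
    inputParams.audioTapProcessor = tap?.takeRetainedValue()
    let audioMix = AVMutableAudioMix()
    audioMix.inputParameters = [inputParams]
    item.audioMix = audioMix
}
```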
Thanks!
Unfortunately, this has been confirmed to be unsupported, at least for the time being.
From an Apple engineer:
The MTAudioProcessingTap is not available with HTTP live streaming. I suggest filing an enhancement if this feature is important to you - and it's usually helpful to describe the type of app you're trying to design and how this feature would be used.
Source: https://forums.developer.apple.com/thread/45966
Our best bet is to file enhancement radars to try to get them to devote some development time towards it. I am in the same unfortunate boat as you.
This is based on the answer you provided in this thread:
Can CMSampleBuffer decode H264 frames?
Can you give more pointers on how you achieved it?
I am getting a raw H.264 stream from a socket on my iPhone. How do I play it?
I hope you can give some hints.
Unfortunately, Apple doesn't give us direct access to the hardware decoder, and there's no way (as far as I know) to just get AVAssetReader to take raw H.264 data. Perhaps somebody has figured it out and would be kind enough to shed some light on it for us.
So the other solutions would be to write the stream to an MP4 file on disk, or to switch to HLS as your streaming method. You could also switch to a software decoder; that would solve your problem, but give you a new one, in that it will eat up a lot of CPU resources if you're doing HD at a high frame rate.
You can go with FFmpeg, or with the VideoToolbox framework provided by Apple.
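If you go the VideoToolbox route (iOS 8 and later), the rough shape is: build a format description from the stream's SPS/PPS, create a decompression session, and feed it AVCC-framed NAL units wrapped in CMSampleBuffers. A sketch of the setup (parsing the NAL units out of your socket stream is up to you):

```swift
import VideoToolbox

// Sketch: hardware H.264 decode with VideoToolbox. Assumes you have already
// extracted the SPS and PPS NAL units from the raw stream.
func makeDecompressionSession(sps: [UInt8], pps: [UInt8])
    -> (VTDecompressionSession, CMVideoFormatDescription)? {
    var formatDesc: CMVideoFormatDescription?
    var status = sps.withUnsafeBufferPointer { spsBuf in
        pps.withUnsafeBufferPointer { ppsBuf in
            CMVideoFormatDescriptionCreateFromH264ParameterSets(
                allocator: kCFAllocatorDefault,
                parameterSetCount: 2,
                parameterSetPointers: [spsBuf.baseAddress!, ppsBuf.baseAddress!],
                parameterSetSizes: [sps.count, pps.count],
                nalUnitHeaderLength: 4, // AVCC: 4-byte length prefix per NAL
                formatDescriptionOut: &formatDesc)
        }
    }
    guard status == noErr, let desc = formatDesc else { return nil }

    var session: VTDecompressionSession?
    status = VTDecompressionSessionCreate(
        allocator: kCFAllocatorDefault,
        formatDescription: desc,
        decoderSpecification: nil,
        imageBufferAttributes: nil,
        outputCallback: nil, // use the output-handler decode variant instead
        decompressionSessionOut: &session)
    guard status == noErr, let s = session else { return nil }
    return (s, desc)
}

// Each AVCC-framed access unit then gets wrapped in a CMSampleBuffer and
// decoded via VTDecompressionSessionDecodeFrame(_:sampleBuffer:flags:
// infoFlagsOut:outputHandler:), which delivers CVImageBuffers to display.
```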
I have been searching for a way to stream audio and video for a while. I could find some explanations, but not a full tutorial on how to do it. Can anyone please suggest a way to do it? Tutorials or sample code would be very helpful...
Here's a fairly recent blog post on the BlackBerry Developer's Blog about the Streaming Media API, including sample code.