How to play an RTSP URL from within an app in iOS

I have found many suggestions on Stack Overflow regarding the use of FFmpeg, and a GitHub link for DFURTSPPlayer, but it does not compile. After integrating FFmpeg, what do I have to write? Suppose I have an HTTP URL; then I write:
moviePath = "http://path.mp4"
movieURL = NSURL(string: moviePath!)
moviePlayer = MPMoviePlayerController(contentURL: movieURL!)
moviePlayer!.play()
So what kind of code should I write to use RTSP URLs?

Here is another post with example FFmpeg code that receives an RTSP stream (that example also decodes the stream to YUV420, stores it in pic, converts the frame to RGB24, stores it in picrgb, and writes it to a file). So to achieve something similar to what you have for HTTP, you should:
1) Write a wrapper Objective-C class for the FFmpeg C code, or just wrap the code in C functions that you call directly from your Objective-C code. You should have a way to pass the RTSP URL to the class or function and to provide a callback for each new frame (see the sketch after this list). In the class/function, start a new thread that actually executes something similar to the code in the example and calls the callback for each newly decoded frame. NOTE: FFmpeg can perform asynchronous I/O through a custom I/O context, which would let you avoid creating the thread, but if you are new to FFmpeg it is probably better to start with the basics and improve the code later.
2) In the callback, update the view (or whatever you are using for display) with the decoded frame data.
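For illustration, here is a rough sketch of what the wrapper's interface could look like. The class name, method names, and callback shape are all made up; the actual FFmpeg open/read/decode loop (as in the linked example) would live behind start:

// RTSPPlayer.h -- hypothetical wrapper interface, not an existing library
#import <Foundation/Foundation.h>
#import <CoreGraphics/CoreGraphics.h>

// Called once per decoded frame, already converted to something drawable (e.g. RGB).
typedef void (^RTSPFrameHandler)(CGImageRef frame);

@interface RTSPPlayer : NSObject
- (instancetype)initWithURL:(NSURL *)rtspURL;
// Spawns a background thread that runs the FFmpeg demux/decode loop
// and invokes the handler for every new frame.
- (void)startWithFrameHandler:(RTSPFrameHandler)handler;
- (void)stop;
@end

From Swift (through a bridging header) you would then create the player with your rtsp:// URL, call start, and draw each delivered frame into an image view, much like the MPMoviePlayerController snippet above plays an HTTP URL.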

Related

Can Google's Speech API accept an external Video URL?

I recently figured out that Google's Vision API can accept an external image URL, and I was curious whether Google's Speech API could accept an external video URL, such as a YouTube video.
The code I have in mind would look something like this:
def transcribe_gcs(youtube_url):
    """Asynchronously transcribes the audio file specified by the gcs_uri."""
    from google.cloud import speech
    from google.cloud.speech import enums
    from google.cloud.speech import types

    client = speech.SpeechClient()
    audio = types.RecognitionAudio(uri=youtube_url)  # swapped out gcs_uri with youtube_url
    config = types.RecognitionConfig(
        encoding=enums.RecognitionConfig.AudioEncoding.FLAC,
        # sample_rate_hertz=16000,
        language_code='en-US')

    operation = client.long_running_recognize(config, audio)

    print('Waiting for operation to complete...')
    response = operation.result(timeout=90)

    # Each result is for a consecutive portion of the audio. Iterate through
    # them to get the transcripts for the entire audio file.
    for result in response.results:
        # The first alternative is the most likely one for this portion.
        print(u'Transcript: {}'.format(result.alternatives[0].transcript))
        print('Confidence: {}'.format(result.alternatives[0].confidence))
I was curious if anyone knew if Google's Speech could accept an external video URL such as a YouTube video?
It needs to be either a local path to your audio file (for audio shorter than one minute) or a GCS URI (for audio longer than one minute). What you're thinking of is not possible; the audio/video file needs to be in GCS.
I think you can achieve this by streaming the same video (for example on Wowza or any server of your choice), then extracting the audio using, say, FFmpeg, and passing that to Google. It should work; use StreamingRecognizeRequest instead of RecognitionAudio (a rough sketch follows).
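A rough sketch of that idea, assuming the ffmpeg command-line tool is installed and using the same google.cloud.speech client as above; the stream URL, audio format, and chunk size are placeholders:

import subprocess

from google.cloud import speech
from google.cloud.speech import enums
from google.cloud.speech import types


def transcribe_stream(stream_url):
    """Pull a stream with ffmpeg, extract raw audio, and feed it to streaming recognition."""
    client = speech.SpeechClient()

    # ffmpeg reads the stream and writes 16 kHz mono 16-bit PCM to stdout.
    ffmpeg = subprocess.Popen(
        ['ffmpeg', '-i', stream_url, '-f', 's16le', '-ac', '1', '-ar', '16000', '-'],
        stdout=subprocess.PIPE)

    streaming_config = types.StreamingRecognitionConfig(
        config=types.RecognitionConfig(
            encoding=enums.RecognitionConfig.AudioEncoding.LINEAR16,
            sample_rate_hertz=16000,
            language_code='en-US'))

    def requests():
        while True:
            chunk = ffmpeg.stdout.read(4096)
            if not chunk:
                break
            yield types.StreamingRecognizeRequest(audio_content=chunk)

    for response in client.streaming_recognize(streaming_config, requests()):
        for result in response.results:
            if result.is_final:
                print(result.alternatives[0].transcript)

Note that a single streaming request is limited in duration, so a long broadcast would need to be split into consecutive streaming sessions.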

Value of type 'AVCaptureFileOutput' has no member 'delegate'

The documentation https://developer.apple.com/reference/avfoundation/avcapturefileoutput indicates a delegate property exists for AVCaptureFileOutput.
But the following code
let vfo = AVCaptureFileOutput()
vfo.delegate = self
gives the error "Value of type 'AVCaptureFileOutput' has no member 'delegate'".
I am looking to use an AVCaptureFileOutputDelegate with an AVCaptureMovieFileOutput instance.
Any pointers would be helpful.
Follow the link to the delegate property on the page you quoted (or look at the #ifs around it in the header file), and you'll notice that property is for macOS only, not iOS. Thus, when you're in a project targeting iOS, that property doesn't exist.
iOS doesn't let you both receive sample buffers during capture and record to a file with the same session -- you can have an AVCaptureVideoDataOutput or an AVCaptureMovieFileOutput, but not both. If you just want delegate callbacks about movie file capture progress, use startRecording(toOutputFileURL:recordingDelegate:) and adopt AVCaptureFileOutputRecordingDelegate instead. If you want sample buffers, use AVCaptureVideoDataOutput to receive them and AVAssetWriter for lower-level file output.
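If the file-recording callbacks are all you need, a minimal Swift sketch looks like this (using the newer startRecording(to:recordingDelegate:) spelling; a configured, running capture session with inputs is assumed to exist):

import AVFoundation

final class RecordingDelegate: NSObject, AVCaptureFileOutputRecordingDelegate {
    func fileOutput(_ output: AVCaptureFileOutput,
                    didFinishRecordingTo outputFileURL: URL,
                    from connections: [AVCaptureConnection],
                    error: Error?) {
        // Called when the movie file has been written (or recording failed).
        print("Finished recording to \(outputFileURL): \(String(describing: error))")
    }
}

let movieOutput = AVCaptureMovieFileOutput()
// session.addOutput(movieOutput)  // assumes `session` is your AVCaptureSession

let recordingDelegate = RecordingDelegate()   // keep a strong reference while recording
let outputURL = FileManager.default.temporaryDirectory.appendingPathComponent("capture.mov")
movieOutput.startRecording(to: outputURL, recordingDelegate: recordingDelegate)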
Thank you for the pointer to AVAssetWriter. I was able to find the RosyWriter sample: https://developer.apple.com/library/content/samplecode/RosyWriter/Introduction/Intro.html. Modifying captureOutput:didOutputSampleBuffer: to capture the audio averagePowerLevel did the trick of getting a recorded movie along with simultaneous audio levels.
But is there a more stripped-down example of its use? My attempts to strip out the renderers, which do the video manipulation, have only broken the sample.

Working Audio Loop Example in Dart

I'm trying to use Dart to get an OGG file to loop using the HTML5 <audio> element. Does anyone have a working example of this? I'm specifically having trouble getting the audio to loop.
I was not able to get a fully controlled loop using the HTML5 AudioElement: sometimes the loop option simply did not work, sometimes there was a gap, and sometimes patterns would overlap.
I had better luck using Web Audio, with something like:
source = audioContext.createBufferSource();
source.buffer = buffer;
gainNode = audioContext.createGain();
gainNode.gain.value = 1;
source.connectNode(gainNode);
gainNode.connectNode(audioContext.destination);
// play it now in loop
source.start(audioContext.currentTime);
source.loop = true;
I was not able to load the source buffer from the HTML audio element, which could have been a solution for the CORS issues I had; the samples were loaded using HTTP requests.
I created a DartPad example that demonstrates looping using the AudioElement native loop feature and Web Audio: https://dartpad.dartlang.org/879424bca794c63698b0
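As for loading the samples over HTTP (mentioned above), a rough Dart sketch looks like this; the URL is a placeholder, error handling is omitted, and the decodeAudioData signature has changed across Dart SDK versions, so treat it as approximate:

import 'dart:async';
import 'dart:html';
import 'dart:typed_data';
import 'dart:web_audio';

// Fetch the OGG file as an ArrayBuffer and decode it into an AudioBuffer
// that can be assigned to source.buffer in the snippet above.
Future<AudioBuffer> loadBuffer(AudioContext audioContext, String url) async {
  final request = await HttpRequest.request(url, responseType: 'arraybuffer');
  final ByteBuffer data = request.response;
  return audioContext.decodeAudioData(data);
}

// Usage (assuming audioContext already exists):
// loadBuffer(audioContext, 'loop.ogg').then((buffer) { /* start the looping source here */ });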

Roku - Using urlTransfer to Call Script File

I'm not sure how Roku and BrightScript actually work. I need to call a script file just before the channel starts to stream; the script file will convert the stream on the fly. I asked how to do this in the Roku forum and was told to use urlTransfer. Well, the SDK gives little help that I can see on how to do it. I ran across this post on Stack Overflow:
How to make api request to some server in roku
It gives a good example, which I think I understand. My confusion is about where and how the function is called; it has to happen right before the video URL is requested so the conversion can start.
Any advice appreciated.
If you are using roVideoPlayer, make the call just before you call the Play() function; if you are using roVideoScreen, just before the Show() function.
Example snippet:
roVideoPlayer:
player = CreateObject("roVideoPlayer")
' Your code to add content for the player
' Your call to the script
player.Play()

roVideoScreen:
player = CreateObject("roVideoScreen")
' Your code to add content for the player
' Your call to the script
player.Show()
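For the "your call to the script" line, the roUrlTransfer part might look roughly like this; the URL is a placeholder for wherever your conversion script is hosted:

' Rough sketch: hit the conversion script over HTTP before starting playback
request = CreateObject("roUrlTransfer")
request.SetUrl("http://example.com/start-conversion")
response = request.GetToString() ' blocks until the script responds
player.Play()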
Hope this helps

How to extract the song name from a live audio stream on the Blackberry Storm?

Hi, I am new to BlackBerry.
I am developing an application to get the song name from a live audio stream. I am able to get the MP3 stream bytes from the particular radio server. To get the song name I add the request header "Icy-metadata: 1", so I am getting the headers from the stream, and to get the MP3 block size I use "icy-metaint". How do I recognize the metadata blocks using this MP3 block size? I am using the following code; can anyone help me get it right? Here b[off+k] is the bytes that come from the server. I am converting the whole stream into a charArray, which is wrong, but how do I recognize the metadata headers according to the MP3 block size?
b[off+k] = buffers[PlayBuf][PlayByte];
String metaSt = httpConn.getHeaderField("icy-metaint");
metaInt = Integer.parseInt(metaSt);
for (int i = 0; i < b[off+k]; i++)
{
    metadataHeader += (new String(b)).toCharArray();
    System.out.println(metadataHeader);
    metadataLength--;
}
Blackberry has no native regex functionality; I would recommend grabbing the regexp-me library (http://code.google.com/p/regexp-me/) and compiling it into your code. I've used it before and its regex support is pretty good. I think the regex in the code you posted would work just fine.
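Beyond the regex library, the block structure itself is simple: the server sends icy-metaint audio bytes, then a single length byte (multiply it by 16), then that many bytes of metadata text such as StreamTitle='...';, and the cycle repeats. Here is a rough sketch in plain Java, assuming in is the InputStream from your connection (for example httpConn.openInputStream()):

import java.io.IOException;
import java.io.InputStream;

public class IcyMetadataReader {

    // Reads one audio+metadata cycle: skip (or play) metaInt audio bytes,
    // read the length byte, then read length*16 bytes of metadata text.
    public static String readNextMetadata(InputStream in, int metaInt) throws IOException {
        long remaining = metaInt;
        while (remaining > 0) {
            long skipped = in.skip(remaining);   // in a real player, read and play these bytes
            if (skipped <= 0) break;
            remaining -= skipped;
        }

        int lengthByte = in.read();              // 0 means "no metadata in this cycle"
        if (lengthByte <= 0) return "";

        byte[] meta = new byte[lengthByte * 16];
        int read = 0;
        while (read < meta.length) {
            int n = in.read(meta, read, meta.length - read);
            if (n < 0) break;
            read += n;
        }
        // Typically contains something like: StreamTitle='Artist - Song';
        return new String(meta, 0, read).trim();
    }
}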
