Working Audio Loop Example in Dart

I'm trying to use Dart to get an OGG file to loop using the HTML5 <audio> element. Does anyone have a working example of this? I'm specifically having trouble getting the audio to loop.

I was not able to get a fully controlled loop using the HTML5 AudioElement: sometimes the loop option simply did not work, sometimes there was a gap, and sometimes patterns would overlap.
I had better luck with Web Audio, using something like:
source = audioContext.createBufferSource();
source.buffer = buffer;
gainNode = audioContext.createGain();
gainNode.gain.value = 1;
source.connectNode(gainNode);
gainNode.connectNode(audioContext.destination);
// Enable looping before starting playback, then play immediately:
source.loop = true;
source.start(audioContext.currentTime);
I was not able to load the source buffer from the HTML audio element, which could have been a solution for the CORS issues I had; instead, the samples were loaded with plain HTTP requests.
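A minimal sketch of that loading step, assuming the dart:html and dart:web_audio APIs and a CORS-accessible URL (loadBuffer and its parameters are illustrative names, not taken from the DartPad example):

import 'dart:async';
import 'dart:html';
import 'dart:typed_data';
import 'dart:web_audio';

// Fetch an audio file over HTTP and decode it into a Web Audio buffer.
Future<AudioBuffer> loadBuffer(AudioContext audioContext, String url) async {
  final request = await HttpRequest.request(url, responseType: 'arraybuffer');
  final bytes = request.response as ByteBuffer;
  // decodeAudioData turns the compressed bytes (e.g. OGG) into PCM.
  return audioContext.decodeAudioData(bytes);
}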
I created a DartPad example that demonstrates looping with the AudioElement's native loop feature and with Web Audio:
https://dartpad.dartlang.org/879424bca794c63698b0

Related

Google cloud speech very inaccurate and misses words on clean audio

I am using Google Cloud Speech through Python and finding that many transcriptions are inaccurate and miss several words. This is a simple script I'm using to return a transcript of an audio file, in this case 'out307.wav':
import io
from google.cloud import speech

client = speech.SpeechClient()
with io.open('out307.wav', 'rb') as audio_file:
    content = audio_file.read()
audio = speech.types.RecognitionAudio(content=content)
config = speech.types.RecognitionConfig(
    enable_word_time_offsets=True,
    language_code='en-US',
    audio_channel_count=1)
response = client.recognize(config, audio)
for result in response.results:
    alternative = result.alternatives[0]
    print(u'Transcript: {}'.format(alternative.transcript))
This returns the following transcript:
to do this the tensions and suspicions except
This is very far off from what the actual audio says (I've uploaded it at https://vocaroo.com/i/s1zdZ0SOH1Ki). The audio is a .wav and very clear, with no background noise. This case is worse than average: sometimes it gets the transcription fully correct on a 10-second audio file, and sometimes it misses just a couple of words. Is there anything I can do to improve results?
This is weird: I tried your audio file with your code and I get the same result, but if I change the language_code to "en-UK" I am able to get the full response.
I work for Google Cloud and I have created a public issue for you here; you can track the updates there.
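In code, that is a one-line change to the asker's config (everything else stays the same):

config = speech.types.RecognitionConfig(
    enable_word_time_offsets=True,
    language_code='en-UK',  # was 'en-US'; with this the full transcript came back
    audio_channel_count=1)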

How to play an RTSP URL from within an app on iOS

I have found many suggestions on Stack Overflow regarding the use of FFmpeg, and a link to the GitHub repo for DFURTSPPlayer, but it does not compile. And after integrating FFmpeg, what do I actually write? Suppose I have HTTP URLs; then I write:
moviePath = "http://path.mp4"
movieURL = NSURL.URLWithString(moviePath!)
moviePlayer = MPMoviePlayerController(contentURL: movieURL)
moviePlayer!.play()
So what kind of code should I write for RTSP URLs?
Here is another post with example FFmpeg code that receives an RTSP stream (it also decodes the stream to YUV420, stores it in pic, converts the frame to RGB24, stores that in picrgb, and writes it to a file). To achieve something similar to what you have for HTTP, you should:
1) Write a wrapper Objective-C class for the FFmpeg C code, or just wrap the code in functions that you call directly from Objective-C. You need a way to pass the RTSP URL to the class or function, and a callback for new frames. In the class/function, start a new thread that runs something similar to the code in the example and calls the callback for each newly decoded frame. NOTE: FFmpeg can perform asynchronous I/O through a custom IO context, which would let you avoid creating the thread, but if you are new to FFmpeg it is probably better to start with the basics and improve the code later.
2) In the callback, update the view (or whatever you are using for display) with the decoded frame data.
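For orientation, a rough sketch of that receiving thread using the current FFmpeg decode API (read_rtsp and FrameCallback are illustrative names; most error handling is omitted):

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

typedef void (*FrameCallback)(AVFrame *frame);

void read_rtsp(const char *url, FrameCallback on_frame) {
    avformat_network_init();
    AVFormatContext *fmt = NULL;
    if (avformat_open_input(&fmt, url, NULL, NULL) < 0) return;
    avformat_find_stream_info(fmt, NULL);

    // Pick the video stream and open a decoder for it.
    int video_idx = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    const AVCodec *codec = avcodec_find_decoder(fmt->streams[video_idx]->codecpar->codec_id);
    AVCodecContext *dec = avcodec_alloc_context3(codec);
    avcodec_parameters_to_context(dec, fmt->streams[video_idx]->codecpar);
    avcodec_open2(dec, codec, NULL);

    AVPacket *pkt = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();
    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == video_idx && avcodec_send_packet(dec, pkt) == 0) {
            while (avcodec_receive_frame(dec, frame) == 0) {
                on_frame(frame); // hand each decoded (YUV) frame to the caller
            }
        }
        av_packet_unref(pkt);
    }
    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&dec);
    avformat_close_input(&fmt);
}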

Stop AUGraph's stuttering

I get a stuttering sound when I first start the AUGraph and play a song with a kAudioUnitSubType_AudioFilePlayer component. The stutter lasts about 3 seconds, but it's enough to bother me, and I also notice the music sometimes stops for a split second while playing (to buffer, I guess?). I have tried changing kAudioUnitProperty_ScheduledFilePrime to random values but notice no change.
What variables or values should I be looking at to get rid of this flaw? Is this an issue with the stream format?
I am using YBAudioUnit from https://github.com/ronaldmannak/YBAudioFramework/tree/master/YBAudioUnit
Code:
YBAudioFilePlayer:
- (void)setFileURL:(NSURL *)fileURL typeHint:(AudioFileTypeID)typeHint {
    if (_fileURL) {
        // Release old file:
        AudioFileClose(_audioFileID);
    }
    _fileURL = fileURL;
    if (_fileURL) {
        YBAudioThrowIfErr(AudioFileOpenURL((__bridge CFURLRef)fileURL, kAudioFileReadPermission, typeHint, &_audioFileID));
        YBAudioThrowIfErr(AudioUnitSetProperty(_auAudioUnit, kAudioUnitProperty_ScheduledFileIDs, kAudioUnitScope_Global, 0, &_audioFileID, sizeof(AudioFileID)));
        // Get number of audio packets in the file:
        UInt32 propsize = sizeof(_filePacketsCount);
        YBAudioThrowIfErr(AudioFileGetProperty(_audioFileID, kAudioFilePropertyAudioDataPacketCount, &propsize, &_filePacketsCount));
        // Get the file's ASBD:
        propsize = sizeof(_fileASBD);
        YBAudioThrowIfErr(AudioFileGetProperty(_audioFileID, kAudioFilePropertyDataFormat, &propsize, &_fileASBD));
        // Get the unit's ASBD (note: sizeof(_unitASBD), not sizeof(_fileASBD) as in the original):
        propsize = sizeof(_unitASBD);
        AudioUnitGetProperty(_auAudioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &_unitASBD, &propsize);
        if (_fileASBD.mSampleRate > 0 && _unitASBD.mSampleRate > 0) {
            _sampleRateRatio = _unitASBD.mSampleRate / _fileASBD.mSampleRate;
        } else {
            _sampleRateRatio = 1.;
        }
    }
}
To play, I call these methods on the YBAudioFilePlayer:
[player1 setFileURL:item.url typeHint:0];
[player1 scheduleEntireFilePrimeAndStartImmediately];
[graph start]; // on a YBAudioUnitGraph, which is really just a basic AUGraph
This is more of a comment than an answer, but it's rather large, so I'll post it here.
I don't have the time and patience to study the code inside the YB.. API, but a couple of things come to mind.
First, I remember experimenting with Audio Units (using Apple's API) and having a lot of stuttering going on. I solved the problem by removing all Objective-C calls inside the callback that feeds data to my AUGraph (well, all except one that I couldn't get rid of). I replaced every Objective-C call with a pure C or C++ call. Example:
// ... inside the render callback
int i = [myClass someProperty]; // Objective-C: a message send on the audio thread
int i = myClass->someVariable;  // C/C++: a plain memory read
This was just an example, but it improved things dramatically and I got rid of the stuttering. Maybe you can take a look at the implementation of the YBXX API and see whether there are a lot of Objective-C calls in the callback; if there are, I would not use the API.
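To illustrate the idea, here is a hypothetical render callback that touches only plain C state (PlayerState and its fields are made-up names, not part of YBAudioUnit):

// Everything this callback touches is plain C, so no Objective-C message
// sends happen on the real-time audio thread.
typedef struct {
    float  *samples;   // ring buffer filled elsewhere
    UInt32  readIndex;
    UInt32  length;
} PlayerState;

static OSStatus RenderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData) {
    PlayerState *state = (PlayerState *)inRefCon; // a C struct, not an Obj-C object
    float *out = (float *)ioData->mBuffers[0].mData;
    for (UInt32 i = 0; i < inNumberFrames; i++) {
        out[i] = state->samples[state->readIndex];
        state->readIndex = (state->readIndex + 1) % state->length;
    }
    return noErr;
}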
Second observation: it seems you're only trying to play an audio file, for which an AUGraph is a lot of overhead; you could use a single IO Audio Unit without the graph.
There are a number of questions to ask:
First, are you using a compressed audio file? If so, you may need to take into account padding frames (kAudioFilePropertyPacketTableInfo) to get the real number of audio frames in the file; see the sketch after this list. Perhaps also try an AIFF, CAF, or WAV file.
Have you made sure no other audio apps are running in the background?
Are there any logging messages?
Have you tried posting to their issues page on GitHub?
The final question is why you are using their framework at all, given that it hasn't been updated in two years. I would recommend The Amazing Audio Engine; it is actively developed by some of the best audio folks on iOS.
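The padding-frames check might look like this (a sketch; audioFileID is assumed to be an already opened AudioFileID):

AudioFilePacketTableInfo ptInfo;
UInt32 size = sizeof(ptInfo);
OSStatus err = AudioFileGetProperty(audioFileID,
                                    kAudioFilePropertyPacketTableInfo,
                                    &size, &ptInfo);
if (err == noErr) {
    // Valid frames exclude the encoder's priming and remainder padding.
    SInt64 realFrames = ptInfo.mNumberValidFrames;
}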

Getting audio visualization using Web Audio API to work on iOS

I'm developing an HTML5 audio player for use specifically on iPhones, and am trying to get an EQ visualizer working. From what I've found, there are two ways to set this up:
One where you load the mp3 file on demand using an XMLHttpRequest:
var request = new XMLHttpRequest();
request.open('GET', 'sampler.mp3', true);
request.responseType = 'arraybuffer';
request.addEventListener('load', bufferSound, false);
request.send();
function bufferSound(event) {
    var request = event.target;
    var buffer = myAudioContext.createBuffer(request.response, false);
    source = myAudioContext.createBufferSource();
    source.buffer = buffer;
}
You then use the source.noteOn and source.noteOff functions to play and pause the audio. Working this way, I AM able to get the EQ visualization going. BUT, you have to wait until the mp3 file has loaded completely before playback can start, which won't work in our situation.
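The analyser wiring for that working path looks roughly like this (the fftSize value and the drawing loop here are illustrative, not my exact code):

var analyser = myAudioContext.createAnalyser();
analyser.fftSize = 256;
source.connect(analyser);
analyser.connect(myAudioContext.destination);

var bins = new Uint8Array(analyser.frequencyBinCount);
function draw() {
    analyser.getByteFrequencyData(bins); // one magnitude per frequency bin
    // ...render bins to a canvas here...
    requestAnimationFrame(draw);
}
draw();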
The other way to do this is to have an <audio> element already on the page, and you get the audio data from that using:
source = myAudioContext.createMediaElementSource(document.querySelector('audio'));
You then use the audio tag's play and pause functions. This solves the loading problem as it allows the media to be played immediately once the page loads... BUT, EQ visualization is gone.
Both methods show the EQ when testing on Chrome (Windows), so there seems to be something specific to iOS/iPhone that isn't allowing me to get the data from an <audio> tag, but will allow me to get it if I load the mp3 file on demand.
...
Any ideas out there?
Unfortunately Safari doesn't properly support MediaElementSource. It's a bug: Why aren't Safari or Firefox able to process audio data from MediaElementSource?

How to extract the song name from a live audio stream on the BlackBerry Storm?

Hi,
I am new to BlackBerry. I am developing an application to get the song name from a live audio stream. I am able to get the MP3 stream bytes from the radio server. To get the song name I add the header "Icy-MetaData: 1", so I receive the metadata headers in the stream, and I use the "icy-metaint" header to get the MP3 block size. How do I recognize the metadata blocks using this MP3 block size? I am using the following code; can anyone help me get it working? Here b[off+k] holds the bytes that come from the server. I am converting the whole stream into a char array, which is wrong, but how do I pick out the metadata headers according to the MP3 block size?
b[off+k] = buffers[PlayBuf][PlayByte];
String metaSt = httpConn.getHeaderField("icy-metaint");
metaInt = Integer.parseInt(metaSt);
for (int i = 0; i < b[off+k]; i++) {
    metadataHeader += new String(b); // wrong: appends the whole buffer as text
    System.out.println(metadataHeader);
    metadataLength--;
}
BlackBerry has no native regex functionality; I would recommend grabbing the regexp-me library (http://code.google.com/p/regexp-me/) and compiling it into your code. I've used it before and its regex support is pretty good. I think the regex in the code you posted would work just fine.
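For reference, the ICY convention the question describes can be parsed without converting the whole stream to characters: after every metaInt bytes of audio there is one length byte (counted in 16-byte units) followed by that much metadata. A sketch, assuming httpConn is the open HttpConnection that sent "Icy-MetaData: 1" (names and the omitted EOF handling are illustrative):

DataInputStream in = httpConn.openDataInputStream();
int metaInt = Integer.parseInt(httpConn.getHeaderField("icy-metaint"));
byte[] mp3Block = new byte[metaInt];

while (true) {
    in.readFully(mp3Block);          // exactly metaInt bytes of MP3 audio
    int metaLen = in.read() * 16;    // one length byte, in 16-byte units
    if (metaLen > 0) {
        byte[] meta = new byte[metaLen];
        in.readFully(meta);
        String s = new String(meta); // e.g. "StreamTitle='Artist - Song';"
        int start = s.indexOf("StreamTitle='");
        if (start >= 0) {
            int end = s.indexOf("';", start + 13);
            String song = s.substring(start + 13, end);
            System.out.println(song);
        }
    }
}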
