How to play mp3 content from resource in Delphi?

I have a TResourceStream that contains a simple WAV sound.
I wrote this line into the resource.res file:
sound WAV "res\notify.wav"
I have the following method that works with WAV:
Res := TResourceStream.Create(HInstance, 'sound', 'WAV');
Res.Position := 0;
SndPlaySound(Res.Memory, SND_MEMORY or SND_ASYNC);
Res.Free;
I converted the WAV into MP3 and did the following things:
resource.res file: sound MP3 "res\notify.MP3"
Changed the first line of the play method to:
Res := TResourceStream.Create(HInstance, 'sound', 'MP3');
But nothing happens. It doesn't throw any exception; simply no sound is heard.
How can I play MP3 files as simply as WAV files?

The SndPlaySound API only supports waveform audio. It is not a general-purpose multimedia API and as such does not (directly or easily) support MP3 playback.
To play your audio through this API you would first need to decode the MP3 into the waveform format that the API expects.
(I should note that it appears to be possible to get the SndPlaySound API to play MP3 data by attaching a WAV header to the data. But detailed information about the audio is required in that header, and the process is a decidedly non-trivial exercise. It is almost certainly harder than using an API more suited to the task from the start.)
Your approach appears to be correct for obtaining a stream containing your MP3 data and with that data in memory there are a number of options available for playing that MP3 audio.
The BASS Audio Library is one such option, though it is a commercial library and not particularly cheap. It is, however, capable of exactly what you need.
There are numerous alternatives, some cheaper, some even free, which might also do the job, though you may find it harder to get assistance with these if they are not as widely used or as well supported.
Even so, you might wish to review some of the alternatives listed in the torry.net directory, specifically in the Components \ Effects and Multimedia section of the catalog.
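For illustration, here is a minimal sketch of that play-from-memory approach using the BASS C API (the Delphi Bass unit exposes the same functions with the same signatures). The resource names mirror the question; error handling is omitted:

/* Sketch only: play an MP3 resource straight from memory with BASS.
   Assumes bass.h / bass.lib from un4seen.com. */
#include <windows.h>
#include "bass.h"

void PlayMp3Resource(HINSTANCE inst)
{
    /* Locate and lock the resource, much as TResourceStream does */
    HRSRC   res  = FindResource(inst, TEXT("sound"), TEXT("MP3"));
    HGLOBAL glob = LoadResource(inst, res);
    void   *data = LockResource(glob);
    DWORD   size = SizeofResource(inst, res);

    BASS_Init(-1, 44100, 0, 0, NULL);        /* default device, 44.1 kHz */

    /* TRUE = create the stream from a memory buffer, not a file name */
    HSTREAM stream = BASS_StreamCreateFile(TRUE, data, 0, size, 0);
    BASS_ChannelPlay(stream, FALSE);         /* FALSE = don't restart */
}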

Related

FFmpeg save stream to mp3

I have an iOS project that plays online radio streams; it uses FFmpeg for playback. I also added the ability to record streams, decoding them via the avcodec_decode_audio4 function and writing the output to a .wav file. But these files are too big, because WAV is an uncompressed format, so I want to encode them to .mp3 instead.
I have found a couple of ways to convert audio, but only when the audio is already a complete file; I want to encode to a compressed format as soon as I get a chunk of data from the stream, not from a finished file.
Is it possible?
Can you give me some advice on how to achieve this?
You can use ffmpeg (aka libav) to encode the audio you're reading with avcodec_decode_audio4 into a file as mp3, as long as libav was configured with lame (--enable-libmp3lame).
Basically, you configure an mp3 codec, then call avcodec_encode_audio2 (who names these things?) on the progressive output of avcodec_decode_audio4.
The canonical example can be confusing because it also deals with video, but you should be able to tease the details you want out of it.
This post on transcoding audio by arashafiei is broadly helpful.
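As a rough sketch of that decode-then-encode hand-off (old-style libav API, matching the function names above): write_packet below is a hypothetical sink, a real implementation must buffer samples up to the encoder's frame_size (e.g. with an AVAudioFifo), and if the decoder's sample format differs from the encoder's, a libswresample conversion is needed in between.

#include <libavcodec/avcodec.h>
#include <libavutil/channel_layout.h>

/* Open an MP3 encoder (requires libav built with --enable-libmp3lame);
   avcodec_register_all() is assumed to have been called at startup. */
AVCodecContext *open_mp3_encoder(int sample_rate, int channels)
{
    AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MP3);
    AVCodecContext *enc = avcodec_alloc_context3(codec);

    enc->sample_rate    = sample_rate;
    enc->channels       = channels;
    enc->channel_layout = av_get_default_channel_layout(channels);
    enc->sample_fmt     = codec->sample_fmts[0]; /* lame wants planar samples */
    enc->bit_rate       = 128000;

    avcodec_open2(enc, codec, NULL);
    return enc;
}

/* Feed each AVFrame that avcodec_decode_audio4 produces straight into the
   encoder, so chunks are compressed as they arrive rather than afterwards. */
void encode_frame(AVCodecContext *enc, AVFrame *decoded)
{
    AVPacket pkt;
    int got_packet = 0;

    av_init_packet(&pkt);
    pkt.data = NULL;  /* let the encoder allocate the payload */
    pkt.size = 0;

    if (avcodec_encode_audio2(enc, &pkt, decoded, &got_packet) == 0 && got_packet) {
        write_packet(pkt.data, pkt.size);  /* hypothetical sink: file, socket... */
        av_free_packet(&pkt);
    }
}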

Audio format to choose for Big audio files

Which audio file format is best to use for large audio files? I have many large audio files to be used in my app, but their current MP3 size adds up to hundreds of MBs.
If you want to save storage on audio files, the file format may not change the file size much; reducing the bit rate (for example, from 320 kbps to 128 kbps) can reduce the file size significantly. As a rough check, an hour of audio takes about 144 MB at 320 kbps but only about 58 MB at 128 kbps.
How to do it using Microsoft's Audio Compression Manager? (Practically, it's not well documented in MSDN.)
Windows provides codecs that compress audio files specifically. The audio files are typically in PCM format (WAVE_FORMAT_PCM) and get played using the simplest DirectSound method (check MSDN; it's at hand and it works).
To play a file using DirectSound, and thus PCM format, you first create a DirectSound object, create a DirectSoundBuffer, and then pump the PCM data directly into the buffer using a keep-the-buffer-filled algorithm.
If you wish to use codecs, you write a procedure that opens a stream file and passes it through an ACM driver object, thus (de)compressing it.
The ACM (Audio Compression Manager) driver finds a codec that suits the input source and decompresses it back to WAVE_FORMAT_PCM so your app is able to play it.
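A minimal sketch of that DirectSound path for a single in-memory WAVE_FORMAT_PCM buffer (a static buffer, so the keep-the-buffer-filled loop mentioned above is skipped; hwnd and the sample data are assumed to exist, and the format values are illustrative):

#include <windows.h>
#include <string.h>
#include <dsound.h>   /* link with dsound.lib; needs DIRECTSOUND_VERSION >= 0x0800 */

void play_pcm(HWND hwnd, const BYTE *pcm, DWORD pcm_bytes)
{
    LPDIRECTSOUND8 ds;
    DirectSoundCreate8(NULL, &ds, NULL);
    IDirectSound8_SetCooperativeLevel(ds, hwnd, DSSCL_PRIORITY);

    /* Describe the raw samples: 44.1 kHz, 16-bit, stereo PCM (assumed) */
    WAVEFORMATEX wf = {0};
    wf.wFormatTag      = WAVE_FORMAT_PCM;
    wf.nChannels       = 2;
    wf.nSamplesPerSec  = 44100;
    wf.wBitsPerSample  = 16;
    wf.nBlockAlign     = wf.nChannels * wf.wBitsPerSample / 8;
    wf.nAvgBytesPerSec = wf.nSamplesPerSec * wf.nBlockAlign;

    DSBUFFERDESC desc = {0};
    desc.dwSize        = sizeof(desc);
    desc.dwBufferBytes = pcm_bytes;
    desc.lpwfxFormat   = &wf;

    LPDIRECTSOUNDBUFFER buf;
    IDirectSound8_CreateSoundBuffer(ds, &desc, &buf, NULL);

    /* Lock, copy the PCM data in, unlock, play once (no looping flag) */
    void *p1; DWORD n1;
    IDirectSoundBuffer_Lock(buf, 0, pcm_bytes, &p1, &n1, NULL, NULL, 0);
    memcpy(p1, pcm, n1);
    IDirectSoundBuffer_Unlock(buf, p1, n1, NULL, 0);
    IDirectSoundBuffer_Play(buf, 0, 0, 0);
}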

AUGraph setup on iOS

I am designing an AUGraph for an iOS application and would appreciate help on the following things.
If I want to play a number of audio files at once, does each file need an audio unit?
From the Core Audio docs:
Linear PCM and IMA/ADPCM (IMA4) audio: You can play multiple linear PCM or IMA4 format sounds simultaneously in iOS without incurring CPU resource problems.
AAC, MP3, and Apple Lossless (ALAC) audio: Playback for AAC, MP3, and Apple Lossless (ALAC) sounds uses efficient hardware-based decoding on iPhone and iPod touch. You can play only one such sound at a time.
So multiple AAC or MP3 files cannot be played at the same time. What is the optimal LPCM format for playing multiple sounds at once?
Does this apply to Audio Units too, as this is under the AudioQueue documentation?
Can an audio unit in an AUGraph be inactive? If an AUGraph looks like this
Speaker/output < recorder unit < mixer unit < number of audio file playing units
what happens if the recorder is not active? Would it still pull, but just not write the buffers to a file?
No; you need to use the mixer audio unit. Check this:
http://developer.apple.com/library/ios/DOCUMENTATION/MusicAudio/Conceptual/AudioUnitHostingGuide_iOS/ConstructingAudioUnitApps/ConstructingAudioUnitApps.html#//apple_ref/doc/uid/TP40009492-CH16-SW1
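The skeleton of such a graph looks roughly like this (a sketch only; a real graph also sets stream formats, the mixer's input bus count, and render callbacks on the mixer's input buses, and calls AUGraphStart afterwards):

#include <AudioToolbox/AudioToolbox.h>

/* Sketch: several file-playing inputs feed a MultiChannelMixer,
   which feeds the RemoteIO output unit. */
void build_graph(AUGraph *outGraph)
{
    AUGraph graph;
    NewAUGraph(&graph);

    AudioComponentDescription mix = { kAudioUnitType_Mixer,
        kAudioUnitSubType_MultiChannelMixer, kAudioUnitManufacturer_Apple, 0, 0 };
    AudioComponentDescription out = { kAudioUnitType_Output,
        kAudioUnitSubType_RemoteIO, kAudioUnitManufacturer_Apple, 0, 0 };

    AUNode mixNode, outNode;
    AUGraphAddNode(graph, &mix, &mixNode);
    AUGraphAddNode(graph, &out, &outNode);

    AUGraphOpen(graph);
    AUGraphConnectNodeInput(graph, mixNode, 0, outNode, 0); /* mixer -> output */
    AUGraphInitialize(graph);  /* the call the exception-breakpoint note below refers to */
    *outGraph = graph;
}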
Mostly reading the document above, wrapping the sample code in a class and creating a pair of utility structures, I coded this 'Simple Sound Engine' from scratch:
http://nicolasmiari.com/blog/a-simple-sound-engine-for-ios-using-the-audio-unit-framework/
(Link to article on my blog containing the source code. Sorry, I moved the blog to Jekyll/GitHub and this article didn't make the cut.)
...I was going to start a repo on GitHub, but it's too much trouble. I am a visual guy, still pretty much git-phobic. Okay, that was a long time ago... Now I use git from the command line :-)
You can use it as-is, or extract the Audio Unit-related code and adapt it to your project.
I believe the Cocos Denshion 'Simple Audio Engine' does pretty much the same thing, but haven't checked the source code.
Known issues
If you have an exception breakpoint set for C++ exceptions, when debugging, the code will stop 2 or 3 times on AUGraphInitialize(). This is a 'non-crashing' exception, so you can click on continue and the code works OK.
To convert your wav files to the uncompressed .caf format, use this command on the Terminal:
afconvert -f caff -d LEI16 mySoundFile.wav mySoundFile.caf
EDIT: So I created a GitHub repo after all:
https://github.com/nicolas-miari/Sound-Engine
Both ordinary .wav and .caf files contain raw PCM audio samples, and can be played without hardware assist or DSP processing if they are already at the destination sample rate.
When there's no audio file or other synthesized data to feed an audio unit that's pulling buffers, the usual practice is to feed it buffers of silence (or perhaps a taper to zero if the previous buffer ended with non-zero amplitude).
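A sketch of that practice as a render callback (names are illustrative; it zero-fills the buffers and flags the output as silence so downstream units can skip processing):

#include <AudioUnit/AudioUnit.h>
#include <string.h>

/* Render callback for an idle input bus: deliver buffers of silence. */
static OSStatus RenderSilence(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData)
{
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++)
        memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);

    *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
    return noErr;
}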

iOS: Obtaining raw PCM data from audio file

I'm investigating a straightforward task:
open an audio file from the iPhone's 'iPod audio library'
allow the user to select a chunk by setting two markers: start and end time
time-reverse this chunk
save it as a new file
What are my options?
I will list the results of a couple of hours of research (forgive the mess; I will, as always, tidy up once I have figured it out):
http://lists.apple.com/archives/coreaudio-api/2005/May/msg00096.html <-- 'I'm currently trying to create a program that plays back audio using an AUAudioFilePlayer
AudioUnit plugin that streams the audio to an output AudioUnit'
AUFilePlayer
http://lists.apple.com/archives/coreaudio-api/2008/Dec/msg00156.html
http://zerokidz.com/audiograph/docs/audiograph.pdf <-- this possibly links to code that does it, but it says it is in beta
When reading audio file with ExtAudioFile read, is it possible to read audio floats not consecutively? <-- this leads to an OS X project that reads an audio file from disk into memory; looking through the code leads us to:
https://developer.apple.com/library/mac/#documentation/MusicAudio/Reference/ExtendedAudioFileServicesReference/Reference/reference.html
As far as I can see, the audiograph project attempts to stream the audio from file in real time, whereas Stephan's project just exposes the audio; however, it looks like he is using obsolete API calls.
this looks like the right code (apart from the fact that there seems to be a bug in it): https://stackoverflow.com/questions/8533143/decoding-mp3-files-by-extaudiofileopenurl
http://cocoadev.com/forums/discussion/499/core-audio/p1
https://developer.apple.com/library/ios/#samplecode/iPhoneExtAudioFileConvertTest/Introduction/Intro.html <-- here is an official Apple sample project that could probably be modified to get what I'm after
I believe you can use iPhoneExtAudioFileConvertTest, like you said, to get what you need. After the user marks the times he wants, the first thing to do is convert to PCM; next you need to find out which packets you need, write them to an audio file, and then recompress. I wrote an answer here on how to get x amount of seconds from an audio file; I've only tested it with MP3 and M4A, but it can be adapted to PCM (PCM should be easier to do since it's linear).
Daniel
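For reference, here is a hedged sketch of that "convert to PCM first" step using ExtAudioFile, the API behind iPhoneExtAudioFileConvertTest (the client format and buffer size below are illustrative, and error checks are collapsed for brevity):

#include <AudioToolbox/AudioToolbox.h>

void decode_to_pcm(CFURLRef fileURL)
{
    ExtAudioFileRef xaf;
    ExtAudioFileOpenURL(fileURL, &xaf);

    /* Ask ExtAudioFile to deliver interleaved 16-bit LPCM at 44.1 kHz,
       decoding MP3/M4A/etc. transparently on the way out */
    AudioStreamBasicDescription pcm = {0};
    pcm.mSampleRate       = 44100;
    pcm.mFormatID         = kAudioFormatLinearPCM;
    pcm.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    pcm.mChannelsPerFrame = 2;
    pcm.mBitsPerChannel   = 16;
    pcm.mBytesPerFrame    = 4;  /* 2 channels x 2 bytes */
    pcm.mBytesPerPacket   = 4;
    pcm.mFramesPerPacket  = 1;
    ExtAudioFileSetProperty(xaf, kExtAudioFileProperty_ClientDataFormat,
                            sizeof(pcm), &pcm);

    SInt16 samples[4096 * 2];
    AudioBufferList buf = { 1 };           /* one interleaved buffer */
    buf.mBuffers[0].mNumberChannels = 2;
    buf.mBuffers[0].mDataByteSize   = sizeof(samples);
    buf.mBuffers[0].mData           = samples;

    UInt32 frames = 4096;
    while (ExtAudioFileRead(xaf, &frames, &buf) == noErr && frames > 0) {
        /* 'frames' stereo frames are now in 'samples': trim, reverse,
           or write them out here, then reset for the next read */
        frames = 4096;
        buf.mBuffers[0].mDataByteSize = sizeof(samples);
    }
    ExtAudioFileDispose(xaf);
}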

How to implement the Adobe HTTP Streaming spec without using their Streaming server

As of Flash 10.1, they have added the ability to add bytes into the NetStream object via the appendBytes method (described here http://www.bytearray.org/?p=1689). The main reason for this addition is that Adobe is finally supporting HTTP streaming of video. This is great, but it seems that you need to use the Adobe Media Streaming Server (http://www.adobe.com/products/httpdynamicstreaming/) to create the correct video chunks from your existing video to allow for smooth streaming.
I have tried to do a hacked version of HTTP streaming in the past where I swap out the NetStream objects (similar to here http://video.leizhu.com/video.html), but there is always a momentary pause between the chunks. With the new appendBytes, I tried to do a quick mock up with the two sections of video from the preceding site, but even then, the skip still remains.
Does anyone know how the two consecutive .FLV files need to be formatted in order for the appendBytes method on the NetStream object to create a nice smooth video without a noticeable skip between the segments?
I was able to get this working using Adobe's File Packager Tool which Samuel described. I didn't use the NetStream object but I used the OSMF Sample Player, which I assume uses it internally. Here's how to do it without using FMS:
Get Adobe's File Packager for Http Dynamic Streaming from http://www.adobe.com/products/httpdynamicstreaming/
Run the File Packager on an existing MP4 file containing H.264/AAC like this:
C:\Program Files\Adobe\Flash Media Server 4\tools\f4fpackager>
f4fpackager.exe --input-file="MyFile.mp4" --segment-duration=30
This will result in 30-second-long F4F files, plus an F4X and an F4M file. The F4F files are your correctly segmented (and fragmented) MP4 files that should play.
If you want to test this using the OSMF Player also do the following:
Get Apache Server
Get Adobe's Http Origin Module for Apache from http://www.adobe.com/products/httpdynamicstreaming/
Install the module according to http://help.adobe.com/en_US/HTTPStreaming/1.0/Using/WS8d6ed60bd880807c48597a9e1265edd6cc0-8000.html
Put the F4F, F4X and F4M file into the vod directory under httpdocs
Get the “OSMF Sample Player for HTTP Dynamic Streaming” from http://www.osmf.org/downloads/OSFMPlayer_zeri2.zip
Put the Sample Player in the httpdocs directory
Load the html file from the Sample Player in a browser eg http://localhost/OSMFPlayer.html
Press the eject button and put in the URL of your F4M file, it should play
So, to answer the original question: Adobe's File Packager is the file splitter to use; you don't need to buy FMS to use it, and it works for FLV and MP4/F4V files.
You don't need to use their server. Wowza supports Adobe's version of HTTP Streaming and you can implement it yourself by segmenting the videos properly and loading all the segments on a standard HTTP server.
Links to all the specs for Adobe's HTTP Streaming are here:
http://help.adobe.com/en_US/HTTPStreaming/1.0/Using/WS9463dbe8dbe45c4c-1ae425bf126054c4d3f-7fff.html
Trying to hack the client to do some custom style http streaming will be a lot more troublesome.
Note that HTTP Streaming does not support streaming several different videos but streams a single file that was broken off into separate segments.
File Packager
A command-line tool that translates on-demand media files into fragments and writes the fragments to F4F files. The File Packager is an offline tool. You can use the File Packager to encrypt files for use with Flash Access. For more information, see Packaging on-demand media.
The File Packager is available from adobe.com and is installed with Adobe® Flash® Media Server to the rootinstall/tools/f4fpackager folder.
The Packager download link is right here: Download File Packager for HTTP Dynamic Streaming
http://www.adobe.com/products/httpdynamicstreaming/
You could use F4Pack, a GUI around Adobe's command-line tool, which lets you process your FLV/F4V files so they can be used for HTTP Dynamic Streaming.
The place in the OSMF code where this happens is the timer-fired state machine inside of the HTTPNetStream class implementation... might be an informative read. I think I even put some helpful comments in there when I wrote it.
As far as the general question:
If you read an entire FLV file into a ByteArray and pass it to appendBytes, it will play. If you break that FLV file in half, and pass the first half as a byte array and then the second half as a byte array, that will play as well.
If you want to be able to switch around between bitrates without a gap, you need to split up your FLV files at matching keyframe points... and remember that only the first call to appendBytes has the initial FLV file header ('F', 'L', 'V', flags, offset)... the rest just expect a continuation of the FLV byte sequence.
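To make that continuation rule concrete, here is a small C sketch of the kind of server-side helper one might write (the function name is hypothetical): every FLV segment after the first must have its 9-byte header and the 4-byte PreviousTagSize0 removed before being appended.

#include <stdint.h>
#include <stddef.h>

/* If the segment starts with an FLV header, skip past it (DataOffset is
   big-endian at bytes 5..8, normally 9) plus the 4-byte PreviousTagSize0,
   leaving the bare tag stream that appendBytes expects as a continuation. */
const uint8_t *strip_flv_header(const uint8_t *seg, size_t *len)
{
    if (*len > 13 && seg[0] == 'F' && seg[1] == 'L' && seg[2] == 'V') {
        uint32_t off = ((uint32_t)seg[5] << 24) | (seg[6] << 16)
                     | (seg[7] << 8) | seg[8];
        off += 4;                 /* skip PreviousTagSize0 as well */
        *len -= off;
        return seg + off;
    }
    return seg;                   /* already a bare tag stream */
}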
I recently found a similar project for node.js that achieves m3u8 transcoding (https://github.com/andrewschaaf/media-server), but I have yet to hear of anyone besides Wowza doing it outside of the Origin module for Apache. Since the payloads are nearly identical, you're better off looking for a good MP4 segmenting solution (plenty out there) than looking for F4M segmenting. The problem is that moov atoms, especially on larger MP4 video, are difficult to manage and put in their proper initial (near the beginning of the file) location. Even using optimal ffmpeg settings and 'qtfaststart', you end up with noticeably slower seeking, inefficient bandwidth usage (usually greedy), and a few minor headaches relating to scrubbing/time that you don't get with FLV/F4V playback.
In my player I have, or intend to have, switching between HTTP Dynamic Streaming (HDS) and MP4 based on load, with realtime log parsing in Apache using awk/cron instead of licensing Adobe's Access product for stream protection. Both have unique 'onmetadata' handlers, but in the end I receive sequenced time/byte hashes that are virtually equivalent; MP4 is just slower. So mod_origin is just a synchronizer / request router for Flash clients (over HTTP). I'm still looking for ways to speed up MP4-container-based playback. One incredible solution I read recently and was rather awestruck by is http://zehfernando.com/2011/flash-video-frame-time-woes/, where a video editor and Flash developer came up with their own MP4 timecoding solution: via an Adobe Premiere script, they literally added about 50 pixels to the bottom of every video frame with a visual 'binary' stamp, like a frame barcode, whose values translate into highly accurate timecodes. So Flash could analyze the video frames as they were painted (in real time) and determine precisely where the player was and which bytes were needed from any kind of MP4 byte-segmenting-friendly webserver. The thing is (and perhaps I'm wrong here) that Flash seems to arbitrarily choose when it gets to the moov data, especially on large video files (0.5-1.5 GB), even if you make sure to run your MP4 through MP4Box (i.e. MP4Box -frag 10000 -inter 0 movie.mp4). I guess this is a problem OSMF and HDS have worked on quite well by
now, though it is annoying that you need Apache and a proprietary closed-source module to use it, IMO. It's probably just a matter of time before open-source implementations arrive, as HDS is only 1-2 years old and it just needs a little reverse engineering, like that Andrew Schaaf guy did with node.js + MPEG-TS streaming (live or not).
In the end I may just end up using OSMF exclusively beneath my UI, as it seems to have similar virtues to HDS, if not more so; i.e. Strobe, if you need a sick, extensible HDS or MP4 open player platform to hack from to realize your own custom player.
Adobe's F4F format is based on MP4 files, are you able to use F4V or MP4 instead of FLV files?
There are plenty of MP4 file splitters around but you would need to make sure the timestamps in the files are continuous, maybe the pause happens when it sees a zero timestamp within the audio or video stream inside the file.