Converting raw PCM to Speex? - ActionScript

For latency reasons, I would like to send Speex-encoded audio frame data to a server instead of the raw PCM I'm sending right now.
The problem is that I'm doing this in Flash, and I want to use a socket connection to stream encoded SPX frames of data.
I read the Speex manual, but unfortunately it does not go over the actual CELP algorithm used to convert PCM to SPX data; it only briefly introduces the use of excitation gains and how the filter coefficients are obtained.
Its libraries are in DLLs - dead ends for me.
I really would like to create a conversion class in ActionScript. Is this possible? Is there any documentation on this? I've been googling to no avail. You'd think there would be more documentation on Speex out there...
And if I can't do this, what would be the most documented audio format to use?
thanks
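For reference, the encode loop against the libspeex C API (the same logic an ActionScript port would have to reproduce) is quite small. A minimal sketch, assuming 8 kHz narrowband where each Speex frame is 160 16-bit samples; the length-prefixed framing written to stdout is just an illustration, not part of the Speex format:

```c
/* Minimal sketch of a PCM -> Speex encode loop using the libspeex C API.
 * Assumes 8 kHz narrowband input; a narrowband Speex frame is 160 samples
 * (this could also be queried with SPEEX_GET_FRAME_SIZE). */
#include <speex/speex.h>
#include <stdio.h>

#define FRAME_SIZE 160          /* samples per narrowband frame */

int main(void)
{
    SpeexBits bits;
    void *enc_state;
    int quality = 8;            /* 0..10, trades bitrate for quality */
    spx_int16_t pcm[FRAME_SIZE];
    char encoded[256];

    speex_bits_init(&bits);
    enc_state = speex_encoder_init(&speex_nb_mode);
    speex_encoder_ctl(enc_state, SPEEX_SET_QUALITY, &quality);

    /* Read one frame of raw 16-bit PCM at a time from stdin and emit
     * length-prefixed Speex frames on stdout. */
    while (fread(pcm, sizeof(spx_int16_t), FRAME_SIZE, stdin) == FRAME_SIZE) {
        speex_bits_reset(&bits);
        speex_encode_int(enc_state, pcm, &bits);
        int nbytes = speex_bits_write(&bits, encoded, sizeof(encoded));
        fwrite(&nbytes, sizeof(int), 1, stdout);   /* simple framing, for illustration */
        fwrite(encoded, 1, nbytes, stdout);
    }

    speex_encoder_destroy(enc_state);
    speex_bits_destroy(&bits);
    return 0;
}
```

How you frame the encoded packets over the socket is up to your own protocol; the length prefix above is only one possible choice.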

Related

Decoding H.264 on iOS

This is based on the answer you provided in this thread.
Can CMSampleBuffer decode H.264 frames?
Can you give more pointers on how you achieved it?
I am getting a raw H.264 stream from a socket on my iPhone. How do I play it?
I hope you can give some hints.
Unfortunately Apple doesn't give us direct access to the hardware, and there's no way (as far as I know) to just get AVAssetReader to take raw H.264 data. Perhaps somebody has figured it out and would be kind enough to shed some light on it for us.
So the other solution would be to write it to an MP4 file on disk, or switch to HLS as your streaming method. You could also switch to a software decoder. That would solve your problem but introduce a new one: it will eat up a lot of CPU resources if you're decoding HD at a high frame rate.
You can go with FFmpeg or the VideoToolbox framework provided by Apple.
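If you go the FFmpeg software-decoder route, the core of it looks roughly like this. A minimal sketch against the modern avcodec send/receive API; fill_packet_from_socket() and display_frame() are hypothetical stubs standing in for your own networking and rendering code:

```c
/* Minimal sketch of software-decoding raw H.264 with FFmpeg's avcodec API. */
#include <libavcodec/avcodec.h>

extern int  fill_packet_from_socket(AVPacket *pkt);  /* your code: fills pkt with NAL data */
extern void display_frame(const AVFrame *frame);     /* your code: renders a decoded frame */

int decode_loop(void)
{
    const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    AVCodecContext *ctx  = avcodec_alloc_context3(codec);
    if (!codec || !ctx || avcodec_open2(ctx, codec, NULL) < 0)
        return -1;

    AVPacket *pkt   = av_packet_alloc();
    AVFrame  *frame = av_frame_alloc();

    /* Feed Annex B NAL units to the decoder and pull out decoded frames. */
    while (fill_packet_from_socket(pkt) == 0) {
        if (avcodec_send_packet(ctx, pkt) == 0) {
            while (avcodec_receive_frame(ctx, frame) == 0)
                display_frame(frame);                /* typically planar YUV420P */
        }
        av_packet_unref(pkt);
    }

    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    return 0;
}
```

The decoded frames are typically planar YUV, so you still need a colour-space conversion step before handing them to your display layer.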

AVAssetReader with streamed H.264 samples

I'm writing an RTSP/H.264 client. Live555 for parsing the RTSP is great, but using ffmpeg for software decoding is just too slow. I'd like to use AVFoundation to hardware decode the samples. I'm not sure how to do this. My question is, is there any way to get AVFoundation (AVAssetReader?) to decode these samples as they come in and display the feed on-screen?
As it stands, H.264 samples that come from memory can't use hardware decoding, because iOS doesn't open up those interfaces; you can only decode a local file or use HTTP Live Streaming. However, a possible workaround is to write every sample into a separate MP4 file and then read it with AVAssetReader. I didn't try that, and speed may be a limiting factor.
This may at least get you started:
https://github.com/mooncatventures-group/FFPlayer-tests

Mixing and equalizing multiple streams of compressed audio on iOS

What I'm trying to do is exactly what the title says: decode multiple compressed audio streams/files - they will be extracted from a modified MP4 file - and apply EQ to them simultaneously in real time.
I have read through most of Apple's docs.
I have tried AudioQueues, but I won't be able to do equalization, as once the compressed audio goes in, it doesn't come out... so I can't manipulate it.
Audio Units don't seem to have any components to handle decompression of AAC and MP3 - if I'm right, the converter unit only handles converting from one LPCM format to another.
I have been trying to work out a solution on and off for about a month and a half now.
I'm now thinking: use a 3rd-party decoder (god help me; I haven't a clue how to use those, the source code is Greek to me; oh, and any recommendations? :x), then feed the decoded LPCM into AudioQueues and do the EQ in the callback.
Maybe I'm missing something here. Suggestions? :(
I'm still trying to figure out Core Audio for my own needs, but from what I can understand, you want to use Extended Audio File Services, which handles reading and decompression for you, producing PCM data you can then hand off to a buffer. The MixerHost sample project provides an example of using ExtAudioFileOpenURL to do this.
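To make that concrete, here is a minimal sketch of the ExtAudioFile path (a C API, so it drops straight into an iOS project). Error handling is omitted and the 16-bit interleaved stereo client format is just an assumption to tune: you open the compressed file, set a client LPCM format, and ExtAudioFileRead hands you decoded PCM you can push into your mixer/EQ:

```c
/* Minimal sketch: decode a compressed file to LPCM with Extended Audio File
 * Services. Error handling omitted; the client format is an assumption. */
#include <AudioToolbox/ExtendedAudioFile.h>

void read_file_as_pcm(CFURLRef url)
{
    ExtAudioFileRef file = NULL;
    ExtAudioFileOpenURL(url, &file);

    /* Ask ExtAudioFile to hand us decoded LPCM in this format. */
    AudioStreamBasicDescription clientFormat = {0};
    clientFormat.mSampleRate       = 44100.0;
    clientFormat.mFormatID         = kAudioFormatLinearPCM;
    clientFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger |
                                     kAudioFormatFlagIsPacked;
    clientFormat.mChannelsPerFrame = 2;
    clientFormat.mBitsPerChannel   = 16;
    clientFormat.mBytesPerFrame    = 4;   /* 2 channels * 2 bytes */
    clientFormat.mFramesPerPacket  = 1;
    clientFormat.mBytesPerPacket   = 4;
    ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat,
                            sizeof(clientFormat), &clientFormat);

    SInt16 samples[4096];
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 2;

    UInt32 frames;
    do {
        bufferList.mBuffers[0].mDataByteSize = sizeof(samples);
        bufferList.mBuffers[0].mData = samples;
        frames = sizeof(samples) / clientFormat.mBytesPerFrame;
        ExtAudioFileRead(file, &frames, &bufferList);
        /* 'frames' decoded LPCM frames are now in 'samples';
           feed them to your mixer/EQ here. */
    } while (frames > 0);

    ExtAudioFileDispose(file);
}
```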

Using the DirectX API to view an H.264 stream decoded by FFmpeg

I am trying to stream a video between two clients.
Client A shall upstream the video to a server in H.264 format and Client B shall downstream it from the server. To downstream, I am using FFmpeg to decode the NAL units arriving in the RTP packets.
My problem is that I must display the image using the DirectX API which requires parameters:
bitstream
picture parameters
quantization matrix
slice info.
On the other hand, what I get back from decoding with FFmpeg are the SPS (Sequence Parameter Set) and PPS (Picture Parameter Set).
I assume that FFmpeg's PPS and DirectX's "picture parameters" are at least tangentially related, but I'm not sure how to obtain the remaining parameters (bitstream, quant_matrix and slice_info) from the PPS and SPS.
Any suggestions (barring those that send me back to Google whence I wearily trudge after two days worth of searches) are greatly appreciated.
Regards
-E
Sounds like you're trying to use a DirectX interface that wants encoded video, not the decoded video you should be getting from ffmpeg. You should have a series of decoded frames that you simply need to display via DirectX/DirectShow.
If you want to have DirectX and/or the video driver/hardware decode it, you need to find the right interface to submit it to.
I'm afraid your question is lacking in detail needed to give any better answer.
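To illustrate the point in the first answer: once FFmpeg has decoded a frame you already have raw pixels, and displaying them is just a pixel-format conversion plus a copy into a surface or texture. A minimal sketch, assuming libswscale is available, that turns a decoded AVFrame into a packed BGRA buffer (the DirectX upload itself is left out):

```c
/* Minimal sketch: convert a decoded AVFrame (typically YUV420P) to a packed
 * BGRA buffer suitable for copying into a DirectX surface or texture. */
#include <libswscale/swscale.h>
#include <libavutil/frame.h>
#include <libavutil/mem.h>

/* Returns a BGRA buffer of width*height*4 bytes, or NULL on error.
 * Caller frees it with av_free(). */
uint8_t *frame_to_bgra(const AVFrame *frame)
{
    int w = frame->width, h = frame->height;
    uint8_t *bgra = av_malloc((size_t)w * h * 4);
    if (!bgra)
        return NULL;

    struct SwsContext *sws = sws_getContext(
        w, h, (enum AVPixelFormat)frame->format,   /* source geometry/format */
        w, h, AV_PIX_FMT_BGRA,                     /* destination: packed BGRA */
        SWS_BILINEAR, NULL, NULL, NULL);
    if (!sws) {
        av_free(bgra);
        return NULL;
    }

    uint8_t *dst_data[4]   = { bgra, NULL, NULL, NULL };
    int      dst_stride[4] = { w * 4, 0, 0, 0 };
    sws_scale(sws, (const uint8_t * const *)frame->data, frame->linesize,
              0, h, dst_data, dst_stride);

    sws_freeContext(sws);
    return bgra;   /* copy this into your locked D3D surface, row by row */
}
```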

Snapshot using VLC (to get a snapshot in RAM)

I was planning to use the VLC library to decode an H.264-based RTSP stream and extract each frame from it (convert the VLC picture to an IplImage). I have done a bit of exploration of the VLC code and concluded that there is a function called libvlc_video_take_snapshot which does a similar thing. However, the captured frame in that case is saved to the hard disk, which I wish to avoid due to the real-time nature of my application. What would be the best way to do this? Would it be possible without modifying the VLC source (I want to avoid recompilation if possible)? I have heard of vmem etc. but could not really figure out what it does or how to use it.
The picture_t structure is internal to the library; how can we get access to it?
Awaiting your response.
P.S. Earlier I tried doing this using FFmpeg; however, the ffmpeg library has a lot of issues while decoding an H.264-based RTSP stream on Windows, and hence I had to switch to VLC.
Regards,
Saurabh Gandhi
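For what it's worth, the vmem idea the question mentions is exposed through LibVLC's video callbacks, so no recompilation is needed. A minimal sketch, where the RV32 chroma, the fixed dimensions and the stream URL are assumptions, and the wrapping into an IplImage is left to the reader:

```c
/* Minimal sketch of grabbing decoded frames in memory with LibVLC's video
 * callbacks (the "vmem" approach), avoiding any writes to disk.
 * WIDTH/HEIGHT and the URL are assumptions; real code would negotiate them. */
#include <vlc/vlc.h>

#define WIDTH  640
#define HEIGHT 480

static unsigned char framebuf[WIDTH * HEIGHT * 4];   /* one RV32 frame */

/* Called before a frame is decoded: give VLC a buffer to render into. */
static void *lock_cb(void *opaque, void **planes)
{
    planes[0] = framebuf;
    return NULL;                 /* picture identifier, unused here */
}

/* Called after decoding: the frame is now complete in framebuf. */
static void unlock_cb(void *opaque, void *picture, void *const *planes)
{
    /* Wrap framebuf in an IplImage here, or copy it out for processing. */
}

static void display_cb(void *opaque, void *picture) { /* nothing to do */ }

int main(void)
{
    libvlc_instance_t *vlc = libvlc_new(0, NULL);
    libvlc_media_t *media =
        libvlc_media_new_location(vlc, "rtsp://example.com/stream");
    libvlc_media_player_t *player = libvlc_media_player_new_from_media(media);

    /* Ask for 32-bit RGB frames of a fixed size, delivered to our callbacks. */
    libvlc_video_set_format(player, "RV32", WIDTH, HEIGHT, WIDTH * 4);
    libvlc_video_set_callbacks(player, lock_cb, unlock_cb, display_cb, NULL);

    libvlc_media_player_play(player);
    /* ... run your application loop, then clean up ... */
    libvlc_media_player_stop(player);
    libvlc_media_player_release(player);
    libvlc_media_release(media);
    libvlc_release(vlc);
    return 0;
}
```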
