iOS with MonoTouch: uploading a WAV file fails

I'm writing an iOS app with MonoTouch.
I'm recording a sound file via AVAudioRecorder, and it produces a working WAV file (which I have verified).
I want to send it to my server by passing it as a byte array (that seems to be the best way).
The problem is that the file that reaches the server is corrupted.
This is how I convert the WAV to a byte[]. Any ideas what I'm doing wrong?
(I iterate once to find the file's size and then copy everything. I do this because GetProperty returns the wrong file size.)
string path = GlobalData.UserDefaults.StringForKey(GlobalData.HeykuSoundPathDefault);
AudioFile song = AudioFile.Open(path, AudioFilePermission.Read, AudioFileType.WAVE);

byte[] audioBuffer = new byte[4000000];
int copied = 0;
int offset = 0;

while (true)
{
    copied = song.Read(offset, audioBuffer, 0, 4096, false);
    if (copied == -1)
    {
        break;
    }
    else
    {
        offset += copied;
        Console.WriteLine(offset);
    }
}

audioBuffer = new byte[offset];
copied = song.Read(0, audioBuffer, 0, offset, false);

Have you tried simply using streams? Something like this:
using (var fileReader = new StreamReader("filename_here"))
{
    using (var webStream = webClient.OpenWrite("http://www.myuploader.com"))
    {
        // Use a synchronous CopyTo here: an un-awaited CopyToAsync could still be
        // running when the using blocks dispose both streams.
        fileReader.BaseStream.CopyTo(webStream);
    }
}
The above method should copy the whole file, including the WAV headers. I suspect AudioFile may be handing you raw PCM audio, which does not include the headers. This is an example of a header-writer function:
private void WriteHeader()
{
    this.writer.Seek(0, SeekOrigin.Begin);

    // RIFF chunk descriptor
    this.writer.Write('R');
    this.writer.Write('I');
    this.writer.Write('F');
    this.writer.Write('F');
    this.writer.Write(36 + this.ByteCount);                 // chunk size = 36 + size of the data chunk
    this.writer.Write('W');
    this.writer.Write('A');
    this.writer.Write('V');
    this.writer.Write('E');

    // "fmt " sub-chunk
    this.writer.Write('f');
    this.writer.Write('m');
    this.writer.Write('t');
    this.writer.Write(' ');
    this.writer.Write((int)16);                             // fmt chunk size (16 for PCM)
    this.writer.Write((short)1);                            // audio format (1 = uncompressed PCM)
    this.writer.Write((short)this.ChannelCount);
    this.writer.Write((int)this.SampleRate);
    this.writer.Write((int)(this.SampleRate * this.ChannelCount * this.BitsPerSample / 8)); // byte rate
    this.writer.Write((short)(this.ChannelCount * this.BitsPerSample / 8));                 // block align
    this.writer.Write((short)this.BitsPerSample);

    // "data" sub-chunk
    this.writer.Write('d');
    this.writer.Write('a');
    this.writer.Write('t');
    this.writer.Write('a');
    this.writer.Write((int)this.byteCount);                 // size of the raw sample data in bytes
}
More info on the file format: https://ccrma.stanford.edu/courses/422/projects/WaveFormat/
As for reading the data from the AudioFile, this would probably work better:
var song = AudioFile.Open(path, AudioFilePermission.Read, AudioFileType.WAVE);
var audioBuffer = new byte[song.Length];
int copied = 0;
while (copied < audioBuffer.Length)
{
    // never ask for more bytes than are left in the file
    int chunk = Math.Min(4096, audioBuffer.Length - copied);
    copied += song.Read(copied, audioBuffer, copied, chunk, false);
}
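For the upload itself, once you have bytes that still contain the RIFF/WAVE header, something along these lines should do (a rough sketch only: the URL is a placeholder, and File.ReadAllBytes simply grabs the recorded file from disk verbatim, header included):
using System.IO;
using System.Net;

// Read the recorded WAV exactly as it sits on disk, so the header stays intact.
byte[] wavBytes = File.ReadAllBytes(path);

using (var client = new WebClient())
{
    client.Headers[HttpRequestHeader.ContentType] = "audio/wav";

    // POST the raw bytes; the endpoint below is only an example.
    byte[] serverReply = client.UploadData("http://www.myuploader.com/upload", wavBytes);
}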

Related

AudioConverter#FillComplexBuffer returns -50 and does not convert anything

I'm closely following this Xamarin sample (based on this Apple sample) to convert a LinearPCM file to an AAC file.
The sample works great, but in my project the FillComplexBuffer method returns error -50 and the InputData event is never triggered, so nothing gets converted.
The error only appears when testing on a device. When testing on the simulator, everything works and I get a properly encoded AAC file at the end.
I tried a lot of things today, and I don't see any difference between my code and the sample code. Do you have any idea where this may come from?
I don't know whether this is related to Xamarin at all; it doesn't seem so, since the Xamarin sample works fine.
Here's the relevant part of my code:
protected void Encode(string path)
{
// In class setup. File at TempWavFilePath has DecodedFormat as format.
//
// DecodedFormat = AudioStreamBasicDescription.CreateLinearPCM();
// AudioStreamBasicDescription encodedFormat = new AudioStreamBasicDescription()
// {
// Format = AudioFormatType.MPEG4AAC,
// SampleRate = DecodedFormat.SampleRate,
// ChannelsPerFrame = DecodedFormat.ChannelsPerFrame,
// };
// AudioStreamBasicDescription.GetFormatInfo (ref encodedFormat);
// EncodedFormat = encodedFormat;
// Setup converter
AudioStreamBasicDescription inputFormat = DecodedFormat;
AudioStreamBasicDescription outputFormat = EncodedFormat;
AudioConverterError converterCreateError;
AudioConverter converter = AudioConverter.Create(inputFormat, outputFormat, out converterCreateError);
if (converterCreateError != AudioConverterError.None)
{
Console.WriteLine("Converter creation error: " + converterCreateError);
}
converter.EncodeBitRate = 192000; // AAC 192kbps
// get the actual formats back from the Audio Converter
inputFormat = converter.CurrentInputStreamDescription;
outputFormat = converter.CurrentOutputStreamDescription;
/*** INPUT ***/
AudioFile inputFile = AudioFile.OpenRead(NSUrl.FromFilename(TempWavFilePath));
// init buffer
const int inputBufferBytesSize = 32768;
IntPtr inputBufferPtr = Marshal.AllocHGlobal(inputBufferBytesSize);
// calc number of packets per read
int inputSizePerPacket = inputFormat.BytesPerPacket;
int inputBufferPacketSize = inputBufferBytesSize / inputSizePerPacket;
AudioStreamPacketDescription[] inputPacketDescriptions = null;
// init position
long inputFilePosition = 0;
// define input delegate
converter.InputData += delegate(ref int numberDataPackets, AudioBuffers data, ref AudioStreamPacketDescription[] dataPacketDescription)
{
// how much to read
if (numberDataPackets > inputBufferPacketSize)
{
numberDataPackets = inputBufferPacketSize;
}
// read from the file
int outNumBytes;
AudioFileError readError = inputFile.ReadPackets(false, out outNumBytes, inputPacketDescriptions, inputFilePosition, ref numberDataPackets, inputBufferPtr);
if (readError != 0)
{
Console.WriteLine("Read error: " + readError);
}
// advance input file packet position
inputFilePosition += numberDataPackets;
// put the data pointer into the buffer list
data.SetData(0, inputBufferPtr, outNumBytes);
// add packet descriptions if required
if (dataPacketDescription != null)
{
if (inputPacketDescriptions != null)
{
dataPacketDescription = inputPacketDescriptions;
}
else
{
dataPacketDescription = null;
}
}
return AudioConverterError.None;
};
/*** OUTPUT ***/
// create the destination file
var outputFile = AudioFile.Create (NSUrl.FromFilename(path), AudioFileType.M4A, outputFormat, AudioFileFlags.EraseFlags);
// init buffer
const int outputBufferBytesSize = 32768;
IntPtr outputBufferPtr = Marshal.AllocHGlobal(outputBufferBytesSize);
AudioBuffers buffers = new AudioBuffers(1);
// calc number of packet per write
int outputSizePerPacket = outputFormat.BytesPerPacket;
AudioStreamPacketDescription[] outputPacketDescriptions = null;
if (outputSizePerPacket == 0) {
// if the destination format is VBR, we need to get max size per packet from the converter
outputSizePerPacket = (int)converter.MaximumOutputPacketSize;
// allocate memory for the PacketDescription structures describing the layout of each packet
outputPacketDescriptions = new AudioStreamPacketDescription [outputBufferBytesSize / outputSizePerPacket];
}
int outputBufferPacketSize = outputBufferBytesSize / outputSizePerPacket;
// init position
long outputFilePosition = 0;
long totalOutputFrames = 0; // used for debugging
// write magic cookie if necessary
if (converter.CompressionMagicCookie != null && converter.CompressionMagicCookie.Length != 0)
{
outputFile.MagicCookie = converter.CompressionMagicCookie;
}
// loop to convert data
Console.WriteLine ("Converting...");
while (true)
{
// create buffer
buffers[0] = new AudioBuffer()
{
NumberChannels = outputFormat.ChannelsPerFrame,
DataByteSize = outputBufferBytesSize,
Data = outputBufferPtr
};
int writtenPackets = outputBufferPacketSize;
// LET'S CONVERT (it's about time...)
AudioConverterError converterFillError = converter.FillComplexBuffer(ref writtenPackets, buffers, outputPacketDescriptions);
if (converterFillError != AudioConverterError.None)
{
Console.WriteLine("FillComplexBuffer error: " + converterFillError);
}
if (writtenPackets == 0) // EOF
{
break;
}
// write to output file
int inNumBytes = buffers[0].DataByteSize;
AudioFileError writeError = outputFile.WritePackets(false, inNumBytes, outputPacketDescriptions, outputFilePosition, ref writtenPackets, outputBufferPtr);
if (writeError != 0)
{
Console.WriteLine("WritePackets error: {0}", writeError);
}
// advance output file packet position
outputFilePosition += writtenPackets;
if (FlowFormat.FramesPerPacket != 0) {
// the format has constant frames per packet
totalOutputFrames += (writtenPackets * FlowFormat.FramesPerPacket);
} else {
// variable frames per packet require doing this for each packet (adding up the number of sample frames of data in each packet)
for (var i = 0; i < writtenPackets; ++i)
{
totalOutputFrames += outputPacketDescriptions[i].VariableFramesInPacket;
}
}
}
// write out any of the leading and trailing frames for compressed formats only
if (outputFormat.BitsPerChannel == 0)
{
Console.WriteLine("Total number of output frames counted: {0}", totalOutputFrames);
WritePacketTableInfo(converter, outputFile);
}
// write the cookie again - sometimes codecs will update cookies at the end of a conversion
if (converter.CompressionMagicCookie != null && converter.CompressionMagicCookie.Length != 0)
{
outputFile.MagicCookie = converter.CompressionMagicCookie;
}
// Clean everything
Marshal.FreeHGlobal(inputBufferPtr);
Marshal.FreeHGlobal(outputBufferPtr);
converter.Dispose();
outputFile.Dispose();
// Remove temp file
File.Delete(TempWavFilePath);
}
I already saw this SO question, but its rather terse C++/Obj-C answer doesn't seem to apply to my problem.
Thanks!
I finally found the solution!
I just had to set the AVAudioSession category before converting the file:
AVAudioSession.SharedInstance().SetCategory(AVAudioSessionCategory.AudioProcessing);
AVAudioSession.SharedInstance().SetActive(true);
Since I also use an AudioQueue to render offline, I in fact have to set the category to AVAudioSessionCategory.PlayAndRecord so that both the offline rendering and the audio conversion work.
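With the same two calls as above, that just means swapping in the other category value:
AVAudioSession.SharedInstance().SetCategory(AVAudioSessionCategory.PlayAndRecord);
AVAudioSession.SharedInstance().SetActive(true);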

NAudio MVC: convert MP3 to WAV

I would like to know whether it is possible to convert MP3 to WAV using the NAudio library, but without saving the converted file to disk (using, for example, a MemoryStream)?
Link to NAudio
Any examples?
I tried it like this:
byte[] fileStream = null;
MemoryStream ms2 = new MemoryStream();
using (WaveStream waveStream = WaveFormatConversionStream.CreatePcmStream(new Mp3FileReader(filePath)))
using (WaveFileWriter waveFileWriter = new WaveFileWriter(ms2, waveStream.WaveFormat))
{
    byte[] bytes3 = new byte[waveStream.Length];
    waveStream.Position = 0;
    waveStream.Read(bytes3, 0, (int)waveStream.Length);
    waveFileWriter.Write(bytes3, 0, bytes3.Length);
    fileStream = bytes3;
    return fileStream;
}
When I save a file from that byte array, it is damaged.
The simplest way to do it is to pass a MemoryStream instance to the WaveFileWriter constructor and then write all the samples to the writer... For the decoding side you can use the Mp3FileReader...
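To make that concrete, here is a minimal sketch of the in-memory route (assuming .NET 4+ for Stream.CopyTo; the important part is that the WAV bytes come out of the MemoryStream the WaveFileWriter wrote to, not out of the raw PCM buffer):
using System.IO;
using NAudio.Wave;

public static byte[] Mp3ToWavBytes(string mp3Path)
{
    using (var ms = new MemoryStream())
    {
        using (var reader = new Mp3FileReader(mp3Path))
        using (var pcmStream = WaveFormatConversionStream.CreatePcmStream(reader))
        using (var writer = new WaveFileWriter(ms, pcmStream.WaveFormat))
        {
            // WaveFileWriter writes the RIFF/WAVE header around the PCM samples.
            pcmStream.CopyTo(writer);
        }
        // Dispose the writer first so it can patch the header lengths,
        // then read the finished WAV out of the MemoryStream.
        return ms.ToArray();
    }
}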
Good luck!

Play a Video from MemoryStream, Using FFMpeg

I'm having a hard time searching for how to play a video file from a TMemoryStream (or a similar in-memory buffer) using FFMpeg. I've seen many things, including UltraStarDX, expensive FFMpeg components for Delphi, and so on.
One component called FFMpeg Vcl Player claims to play video formats from a memory stream. I downloaded the trial version, and I guess it uses CircularBuffer.pas for that purpose (maybe).
Does anyone know how to do this?
Edit:
Now the better question is how to play an encrypted video file using FFMpeg or similar libraries.
To play video from a memory stream, you can use a custom AVIOContext.
static const int kBufferSize = 4 * 1024;
class my_iocontext_private
{
private:
my_iocontext_private(my_iocontext_private const &);
my_iocontext_private& operator = (my_iocontext_private const &);
public:
my_iocontext_private(IInputStreamPtr inputStream)
: inputStream_(inputStream)
, buffer_size_(kBufferSize)
, buffer_(static_cast<unsigned char*>(::av_malloc(buffer_size_))) {
ctx_ = ::avio_alloc_context(buffer_, buffer_size_, 0, this,
&my_iocontext_private::read, NULL, &my_iocontext_private::seek);
}
~my_iocontext_private() {
::av_free(ctx_);
::av_free(buffer_);
}
void reset_inner_context() { ctx_ = NULL; buffer_ = NULL; }
static int read(void *opaque, unsigned char *buf, int buf_size) {
my_iocontext_private* h = static_cast<my_iocontext_private*>(opaque);
return h->inputStream_->Read(buf, buf_size);
}
static int64_t seek(void *opaque, int64_t offset, int whence) {
my_iocontext_private* h = static_cast<my_iocontext_private*>(opaque);
if (0x10000 == whence)
return h->inputStream_->Size();
return h->inputStream_->Seek(offset, whence);
}
::AVIOContext *get_avio() { return ctx_; }
private:
IInputStreamPtr inputStream_; // abstract stream interface, You can adapt it to TMemoryStream
int buffer_size_;
unsigned char * buffer_;
::AVIOContext * ctx_;
};
//// ..........
/// prepare input stream:
IInputStreamPtr inputStream = MyCustomCreateInputStreamFromMemory();
my_iocontext_private priv_ctx(inputStream);
AVFormatContext * ctx = ::avformat_alloc_context();
ctx->pb = priv_ctx.get_avio();
int err = avformat_open_input(&ctx, "arbitrarytext", NULL, NULL);
if (err < 0)
return -1;
//// normal usage of ctx
//// avformat_find_stream_info(ctx, NULL);
//// av_read_frame(ctx, &pkt);
//// etc..
You can waste your time rewriting FFMPEG from C++ to Delphi, or mess with wrapper libraries.
Or if you're just interested in playing a video in Delphi, then check out Mitov's VideoLab components.
http://www.mitov.com/products/videolab#components
If you want to play a stream from memory, you can create a virtual drive. I suggest BoxedAppSdk.
It will help you make a virtual drive with virtual files that you can write to, and then give the virtual path to the player component that you have.
BoxedApp is not free, but it is really awesome and very simple to use!

Can we compress a large file in chunks with GZIP on BlackBerry?

I saw the sample API code below:
public static byte[] compress( byte[] data )
{
    // declared outside the try block so it is still in scope for the return
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    try
    {
        GZIPOutputStream gzipStream = new GZIPOutputStream( baos, 6, GZIPOutputStream.MAX_LOG2_WINDOW_LENGTH );
        gzipStream.write( data );
        gzipStream.close();
    }
    catch(IOException ioe)
    {
        return null;
    }
    return baos.toByteArray();
}
But when I tried to compress a large file on a Curve 8900 with OS 4.6, I got an "OutOfMemoryError", so I would like to know how to compress in small chunks.
I already tried the code below, but it doesn't work; the compressed file cannot be decompressed...
file = (FileConnection)Connector.open(_fileOutputPath, Connector.READ_WRITE);
if (!file.exists()) {
    file.create();
}
os = file.openOutputStream();
is = FileUtil.getInputStream(_fileInputPath, 0);

int tmpSize = 1024;
byte[] tmp = new byte[tmpSize];
int len = -1;

gzipStream = new GZIPOutputStream( os, 6, GZIPOutputStream.MAX_LOG2_WINDOW_LENGTH );
while((len = is.read(tmp, 0, tmpSize)) != -1) {
    gzipStream.write(tmp, 0, len);
}
GZIPOutputStream does not produce a file suitable for use with the gzip command line tool. This is because it doesn't produce the necessary file headers. How did you test decompressing it? You should write a similar Java program that makes use of GZIPInputStream to test, as 'gunzip' is not going to recognize the input.
The problem with the first code sample is that the ByteArrayOutputStream gets too big for the limited memory of a mobile device.
An option could be to write to a file (for instance on the SD card) first.
The second code sample seems fine, but see Michael's answer.

How to achieve minimum size when compressing a small amount of data losslessly?

I don't understand the answer to "Why does gzip/deflate compressing a small file result in many trailing zeroes?"
(Why does gzip/deflate compressing a small file result in many trailing zeroes?)
How would you go about compressing a small amount of data (½-2 KB) down to minimum size in a .NET environment?
(Runtime cost is not an issue for me. Can I trade speed for size? Should I use third-party products?
Developer license fees are OK, but runtime license fees are not.)
Any suggestions on how I can improve the code below for:
(a) a higher compression ratio?
(b) more proper use of streams?
Here is the C# code that needs improvement:
private static byte[] SerializeAndCompress(MyClass myObject)
{
using (var inStream = new System.IO.MemoryStream())
{
Serializer.Serialize<MyClass>(inStream, myObject); // Protocol Buffers serialization. (Code not included here.)
byte[] gZipBytearray = GZipCompress(inStream);
return gZipBytearray;
}
}
private static Byte[] GZipCompress(MemoryStream inStream)
{
inStream.Position = 0;
byte[] byteArray;
{
using (MemoryStream outStream = new MemoryStream())
{
bool LeaveOutStreamOpen = true;
using (GZipStream compressStream = new GZipStream(outStream,
CompressionMode.Compress, LeaveOutStreamOpen))
{
// Copy the input stream into the compression stream.
// inStream.CopyTo(compressStream); TODO: "Uncomment" this line and remove the next one after upgrading to .NET 4 or later.
CopyFromStreamToStream(inStream, compressStream);
}
byteArray = CreateByteArrayFromStream(outStream); // outStream is complete first after compressStream have been closed.
}
}
return byteArray;
}
private static void CopyFromStreamToStream(Stream sourceStream, Stream destinationStream)
{
byte[] buffer = new byte[4096];
int numRead;
while ((numRead = sourceStream.Read(buffer, 0, buffer.Length)) != 0)
{
destinationStream.Write(buffer, 0, numRead);
}
}
private static byte[] CreateByteArrayFromStream(MemoryStream outStream)
{
byte[] byteArray = new byte[outStream.Length];
outStream.Position = 0;
outStream.Read(byteArray, 0, (int)outStream.Length);
return byteArray;
}
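One small, low-risk win for payloads this size is to use DeflateStream instead of GZipStream: the compressed data is the same deflate stream, but the roughly 18 bytes of GZip header and CRC/length trailer are omitted. A minimal sketch, assuming .NET 4+ (for Stream.CopyTo) and the same protobuf Serializer as above; the receiving side then has to decompress with a DeflateStream as well:
using System.IO;
using System.IO.Compression;

private static byte[] SerializeAndDeflate(MyClass myObject)
{
    using (var inStream = new MemoryStream())
    {
        Serializer.Serialize<MyClass>(inStream, myObject);
        inStream.Position = 0;

        using (var outStream = new MemoryStream())
        {
            // Raw deflate: no 10-byte GZip header, no 8-byte CRC/length trailer.
            using (var deflate = new DeflateStream(outStream, CompressionMode.Compress, leaveOpen: true))
            {
                inStream.CopyTo(deflate);
            }
            // The DeflateStream must be closed before the output is complete.
            return outStream.ToArray();
        }
    }
}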

Resources