How to know the size of one AVPacket in ffmpeg? - memory

I am using the ffmpeg library. I want to know how much memory one packet can take.
I debugged to check the members of an AVPacket, and none of them seem reasonable, such as AVPacket.size, etc.

If you provide your own data buffer, it needs to have a size of at least FF_MIN_BUFFER_SIZE. You would then set AVPacket.size to the allocated size, and AVPacket.data to the memory you've allocated.
Note that the FFmpeg encoding routines will simply fail if you provide your own buffer and it's too small.
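For illustration, that first approach might look like this (a sketch only; error checking omitted, and av_malloc() is the usual FFmpeg allocator):
AVPacket pkt;
av_init_packet(&pkt);
pkt.data = av_malloc(FF_MIN_BUFFER_SIZE); // your own buffer
pkt.size = FF_MIN_BUFFER_SIZE;            // tell the encoder how big it is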
The other possibility is to let FFmpeg calculate the optimal size for you.
Then do something like:
AVPacket pkt;
pkt.size = 0;
pkt.data = NULL; // <-- the critical part is here
int got_output = 0;
int ret = avcodec_encode_audio2(ctx, &pkt, NULL, &got_output);
and provide this AVPacket to the encoding codec. Memory will be allocated automatically.
You will have to call av_free_packet() upon return from the encoder if got_output is set to 1.
FFmpeg will automatically free the AVPacket content in case of error.
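Putting it together, a minimal sketch of the allocate-on-demand pattern might look like this (not the poster's code; ctx, frame, and outfile are assumed to be an already-configured AVCodecContext, an input AVFrame, and an open FILE):
#include <stdio.h>
#include <libavcodec/avcodec.h>

static int encode_one(AVCodecContext *ctx, AVFrame *frame, FILE *outfile)
{
    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.data = NULL;  // let the encoder allocate the buffer
    pkt.size = 0;

    int got_output = 0;
    int ret = avcodec_encode_audio2(ctx, &pkt, frame, &got_output);
    if (ret < 0)
        return ret;   // on error, FFmpeg has already freed the packet contents

    if (got_output) {
        fwrite(pkt.data, 1, pkt.size, outfile); // pkt.size is the actual size
        av_free_packet(&pkt);                   // release the encoder-allocated buffer
    }
    return 0;
}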

AVPacket::size holds the size of the referenced data. Because it is a generic container for data, there can be no definite answer to the question
how much memory one packet can take
It can actually take from zero to a lot. Everything depends on data type, codec and other related parameters.
From FFmpeg examples:
static void audio_encode_example(const char *filename)
{
    // ...
    AVPacket pkt;
    // ...
    ret = avcodec_encode_audio2(c, &pkt, NULL, &got_output);
    // ...
    if (got_output) {
        fwrite(pkt.data, 1, pkt.size, f); // <<--- AVPacket.size
        av_free_packet(&pkt);
    }
    // ...
}

Related

STM32 - Reading I2S to record a .WAV file. Audio choppy, what is causing it?

I'm using an STM32 (STM32F446RE) to receive audio from two INMP441 MEMS microphones in a stereo setup via the I2S protocol and record it into a .WAV file on a micro SD card, using the HAL library.
I wrote the firmware that records audio into a .WAV file with FreeRTOS, but the audio files I record sound like Darth Vader. A screenshot of the audio in Audacity, zoomed in, shows a constant noise inserted in between the real audio data.
I don't know what is causing this.
I have tried increasing the MessageQueue, but that doesn't seem to be the problem; the queue stays at 0 most of the time. I've also tried different frame sizes and sampling rates, changing the number of channels, and using only one INMP441, all without success.
I'll explain the firmware below.
The RTOS architecture I implemented consists of three tasks. The first one receives a command via UART (with interrupts) that signals when to start or stop recording. The second one is simply a state machine that walks through the steps of writing a .WAV file.
Here is the code for the WriteWavFileTask:
switch(audio_state)
{
    case STATE_START_RECORDING:
        sprintf(filename, "%saud_%03d.wav", SDPath, count++);
        do
        {
            res = f_open(&file_ptr, filename, FA_CREATE_ALWAYS|FA_WRITE);
        } while(res != FR_OK);
        res = fwrite_wav_header(&file_ptr, I2S_SAMPLE_FREQUENCY, I2S_FRAME, 2);
        HAL_I2S_Receive_DMA(&hi2s2, aud_buf, READ_SIZE);
        audio_state = STATE_RECORDING;
        break;

    case STATE_RECORDING:
        osDelay(50);
        break;

    case STATE_STOP:
        HAL_I2S_DMAStop(&hi2s2);
        while(osMessageQueueGetCount(AudioQueueHandle)) osDelay(1000);
        // Patch the RIFF (offset 4) and data (offset 40) chunk sizes in the
        // 44-byte WAV header now that the final file size is known.
        filesize = f_size(&file_ptr);
        data_len = filesize - 44;
        total_len = filesize - 8;
        f_lseek(&file_ptr, 4);
        f_write(&file_ptr, (uint8_t*)&total_len, 4, bw);
        f_lseek(&file_ptr, 40);
        f_write(&file_ptr, (uint8_t*)&data_len, 4, bw);
        f_close(&file_ptr);
        audio_state = STATE_IDLE;
        break;

    case STATE_IDLE:
        osThreadSuspend(WAVHandle);
        audio_state = STATE_START_RECORDING;
        break;

    default:
        osDelay(50);
        break;
}
Here are the macros used in the code for readability:
#define I2S_DATA_WORD_LENGTH (24) // industry-standard 24-bit I2S
#define I2S_FRAME (32) // bits per sample
#define READ_SIZE (128) // samples to read from I2S
#define WRITE_SIZE (READ_SIZE*I2S_FRAME/16) // half words to write
#define WRITE_SIZE_BYTES (WRITE_SIZE*2) // bytes to write
#define I2S_SAMPLE_FREQUENCY (16000) // sample frequency
The last task is responsible for processing the buffer received via I2S. Here is the code:
void convert_endianness(uint32_t *array, uint16_t Size) {
    for (int i = 0; i < Size; i++) {
        array[i] = __REV(array[i]);
    }
}

void HAL_I2S_RxCpltCallback(I2S_HandleTypeDef *hi2s)
{
    convert_endianness((uint32_t *)aud_buf, READ_SIZE);
    osMessageQueuePut(AudioQueueHandle, aud_buf, 0L, 0);
    HAL_I2S_Receive_DMA(hi2s, aud_buf, READ_SIZE);
}

void pvrWriteAudioTask(void *argument)
{
    /* USER CODE BEGIN pvrWriteAudioTask */
    static UINT *bw;
    static uint16_t aud_ptr[WRITE_SIZE];
    /* Infinite loop */
    for(;;)
    {
        osMessageQueueGet(AudioQueueHandle, aud_ptr, 0L, osWaitForever);
        res = f_write(&file_ptr, aud_ptr, WRITE_SIZE_BYTES, bw);
    }
    /* USER CODE END pvrWriteAudioTask */
}
This task reads from the queue an array of 256 uint16_t elements containing the raw PCM audio data. f_write takes its Size parameter as a number of bytes to write to the SD card, so 512 bytes here. The I2S receives 128 frames (for a 32-bit frame, 128 words).
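As a quick sanity check on that arithmetic (not part of the original firmware), the macro values can be verified at compile time, assuming C11:
// 128 samples x 32-bit frames = 256 half-words = 512 bytes per transfer
_Static_assert(WRITE_SIZE == 256, "expected 256 uint16_t half-words");
_Static_assert(WRITE_SIZE_BYTES == 512, "expected 512 bytes per f_write");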
The following is the configuration for the I2S and clocks:
Any help would be much appreciated!
Solution
As pmacfarlane pointed out, the problem was the method used for buffering the audio data. The solution consisted of reducing the overhead in the ISR and implementing circular DMA for double buffering. Here is the code:
#define I2S_DATA_WORD_LENGTH (24)            // industry-standard 24-bit I2S
#define I2S_FRAME (32)                       // bits per sample
#define READ_SIZE (128)                      // samples to read from I2S
#define BUFFER_SIZE (READ_SIZE*I2S_FRAME/16) // number of uint16_t elements expected
#define WRITE_SIZE_BYTES (BUFFER_SIZE*2)     // bytes to write
#define I2S_SAMPLE_FREQUENCY (16000)         // sample frequency

uint16_t aud_buf[2*BUFFER_SIZE];             // double buffering
static volatile uint16_t *BufPtr;            // points at the half that is ready

void convert_endianness(uint32_t *array, uint16_t Size) {
    for (int i = 0; i < Size; i++) {
        array[i] = __REV(array[i]);
    }
}

void HAL_I2S_RxHalfCpltCallback(I2S_HandleTypeDef *hi2s)
{
    BufPtr = aud_buf;                        // first half ready
    osSemaphoreRelease(RxAudioSemHandle);
}

void HAL_I2S_RxCpltCallback(I2S_HandleTypeDef *hi2s)
{
    BufPtr = &aud_buf[BUFFER_SIZE];          // second half ready
    osSemaphoreRelease(RxAudioSemHandle);
}

void pvrWriteAudioTask(void *argument)
{
    /* USER CODE BEGIN pvrWriteAudioTask */
    static UINT bw;                          // f_write stores the written byte count here
    /* Infinite loop */
    for(;;)
    {
        osSemaphoreAcquire(RxAudioSemHandle, osWaitForever);
        convert_endianness((uint32_t *)BufPtr, READ_SIZE);
        res = f_write(&file_ptr, (const void *)BufPtr, WRITE_SIZE_BYTES, &bw);
    }
    /* USER CODE END pvrWriteAudioTask */
}
Problems
I think the problem is your method of buffering the audio data - mainly in this function:
void HAL_I2S_RxCpltCallback(I2S_HandleTypeDef *hi2s)
{
    convert_endianness((uint32_t *)aud_buf, READ_SIZE);
    osMessageQueuePut(AudioQueueHandle, aud_buf, 0L, 0);
    HAL_I2S_Receive_DMA(hi2s, aud_buf, READ_SIZE);
}
The main problem is that you are re-using the same buffer each time. You have queued a message to save aud_buf to the SD-card, but you've also instructed the I2S to start DMAing data into that same buffer, before it has been saved. You'll end up saving some kind of mish-mash of "old" data and "new" data.
@Flexz pointed out that the message queue takes a copy of the data, so there is no issue with the I2S writing over the data that is being written to the SD card. However, taking the copy (in an ISR) adds overhead, and delays the start of the new I2S DMA.
Another problem is that you are doing the endian conversion in this function (that is called from an ISR). This will block any other (lower priority) interrupts from being serviced while this happens, which is a bad thing in an embedded system. You should do the endian conversion in the task that reads from the queue. ISRs should be very short and do the minimum possible work (often just setting a flag, giving a semaphore, or adding something to a queue).
Lastly, while you are doing the endian conversion, what is happening to the incoming audio samples? The previous DMA has completed, and you haven't started a new one, so they will just be dropped on the floor.
Possible solution
You probably want to allocate a suitably big buffer, and configure your DMA to work in circular buffer mode. This means that once started, the DMA will continue forever (until you stop it), so you'll never drop any samples. There won't be any gap between one DMA finishing and a new one starting, since you never need to start a new one.
The DMA provides a "half-complete" interrupt, to say when it has filled half the buffer. So start the DMA, and when you get the half-complete interrupt, queue up the first half of the buffer to be saved. When you get the fully-complete interrupt, queue up the second half of the buffer to be saved. Rinse and repeat.
You might want to add some logic to detect if the interrupt happens before the previous save has completed, since the data will be overrun and possibly corrupted. Depending on the speed of the SD-card (and the sample rate), this may or may not be a problem.
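For example, a minimal sketch of such detection (save_pending and overrun_count are names invented here, not part of the original code):
static volatile int save_pending = 0;        // set in the ISR, cleared after f_write()
static volatile uint32_t overrun_count = 0;  // number of half-buffers lost

void HAL_I2S_RxHalfCpltCallback(I2S_HandleTypeDef *hi2s)
{
    if (save_pending)   // previous half-buffer not saved yet: it is about to be overwritten
        overrun_count++;
    save_pending = 1;
    BufPtr = aud_buf;
    osSemaphoreRelease(RxAudioSemHandle);
}
// ...and in pvrWriteAudioTask, set save_pending = 0 once f_write() returns.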

GpuMat to FFMPEG Encoder

I'm doing some image processing with OpenCV CUDA, so what I end up with is a cv::cuda::GpuMat. I now want to encode it using FFmpeg (so I can choose whether the encoder is hardware accelerated or not). I wonder if I can somehow keep the data on the GPU for the encoder without downloading it, because that seems to be the bottleneck in my application when running multiple threads.
I'm resizing the images with OpenCV CUDA so I have less to download (resizing with sws_scale makes no difference).
cv::cuda::GpuMat currentFrame;
...
cv::cuda::GpuMat resized;
cv::cuda::resize(currentFrame, resized, cv::Size(width*0.75, height*0.75), 0, 0, cv::INTER_NEAREST);
cv::Mat frameEnc = cv::Mat(resized); // downloads the GpuMat to host memory
const int stride[] = { static_cast<int>(frameEnc.step[0]) };
sws_scale(swsctx, &frameEnc.data, stride, 0, frameEnc.rows, avframe->data, avframe->linesize);
ret = avcodec_send_frame(codec, avframe);
if(!ret) {
    /* rescale packet timestamp */
    pkt->duration = 1;
    av_packet_rescale_ts(pkt, codec->time_base, vstrm->time_base);
    /* write packet */
    av_write_frame(outctx, pkt);
}
Now this does work and performs ok, but I really wish I could do something like:
cv::cuda::GpuMat currentFrame;
...
GpuMatToAvFrame(currentFrame, avframe);
ret = avcodec_send_frame(codec, avframe);
if(!ret) {
    /* rescale packet timestamp */
    pkt->duration = 1;
    av_packet_rescale_ts(pkt, codec->time_base, vstrm->time_base);
    /* write packet */
    av_write_frame(outctx, pkt);
}
where the avframe data is also on the GPU, so that I don't need any GPU-to-CPU or CPU-to-GPU transfers.
I think the class cv::cudacodec::VideoWriter could help, once an issue with OpenCV gets fixed. The class allows you to write a GpuMat directly. However, I believe that due to a bug in OpenCV you currently can't build OpenCV with support for this class, which means this isn't a great solution now, but it might be in the future.
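For reference, usage would look roughly like this once the class is buildable (a sketch only, not working code today; the exact createVideoWriter signature varies between OpenCV versions, and it requires OpenCV built against the NVIDIA Video Codec SDK):
#include <opencv2/cudacodec.hpp>

// Hypothetical usage, assuming a BGR GpuMat and an OpenCV 4.x-style API.
cv::Ptr<cv::cudacodec::VideoWriter> writer =
    cv::cudacodec::createVideoWriter("out.h264", currentFrame.size(),
                                     cv::cudacodec::Codec::H264, /*fps=*/25.0);
writer->write(currentFrame); // the frame data never leaves the GPU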

How do I increase the size of EZAudio EZMicrophone?

I would like to use the EZAudio framework to do realtime FFT processing of the microphone signal, along with some other processing, in order to determine the peak frequency.
The problem is that the EZMicrophone class only appears to work on 512 samples, but my signal requires an FFT of 8192 or even 16384 samples. There doesn't appear to be a way to change the buffer size in EZMicrophone, but I've read posts that recommend creating an array of my target size and appending the microphone buffer to it, then doing the FFT when it's full.
When I do this, though, I get large chunks of memory with no data, or discontinuities between the segments of copied memory. I think it may have something to do with the timing or order in which the microphone delegate is called, or with memory being overwritten by different threads... I'm grasping at straws here. Am I correct in assuming that this code is executed every time the microphone buffer is filled with a new 512 samples?
Can anyone suggest what I may be doing wrong? I've been stuck on this for a long time.
Here is the post I've been using as a reference:
EZAudio: How do you separate the buffersize from the FFT window size(desire higher frequency bin resolution).
// Global variables which are bad but I'm just trying to make things work
float tempBuf[512];
float fftBuf[8192];
int samplesRemaining = 8192;
int samplestoCopy = 512;
int FFTLEN = 8192;
int fftBufIndex = 0;

#pragma mark - EZMicrophoneDelegate
-(void)   microphone:(EZMicrophone *)microphone
    hasAudioReceived:(float **)buffer
      withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels {
    // Copy the microphone buffer so it won't be changed
    memcpy(tempBuf, buffer[0], bufferSize);
    dispatch_async(dispatch_get_main_queue(), ^{
        // Setup the FFT if it's not already setup
        if( !_isFFTSetup ){
            [self createFFTWithBufferSize:FFTLEN withAudioData:fftBuf];
            _isFFTSetup = YES;
        }
        int samplesRemaining = FFTLEN;
        memcpy(fftBuf+fftBufIndex, tempBuf, samplestoCopy*sizeof(float));
        fftBufIndex += samplestoCopy;
        samplesRemaining -= samplestoCopy;
        if (fftBufIndex == FFTLEN)
        {
            fftBufIndex = 0;
            samplesRemaining = FFTLEN;
            [self updateFFTWithBufferSize:FFTLEN withAudioData:fftBuf];
        }
    });
}
You likely have threading issues because you are trying to do work in blocks that takes much, much longer than the time between audio callbacks. Your code is called repeatedly before prior calls have finished (with the FFT setup or with clearing the FFT buffer).
Try doing the FFT setup outside the callback before starting the recording, only copy to a circular buffer or FIFO inside the callback, and do the FFT in code async to the callback (not locked in the same block as the circular buffer copy).
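A minimal sketch of that approach (names are illustrative, not EZAudio API; the FIFO is a plain ring buffer sized to the FFT window):
// Shared ring buffer: RING_LEN (8192) is a multiple of the 512-sample callback size.
#define RING_LEN 8192
static float ringBuf[RING_LEN];
static volatile int writeIndex = 0;

// Called from the microphone callback: copy only, no FFT work here.
static void ringBufferWrite(const float *samples, UInt32 count) {
    for (UInt32 i = 0; i < count; i++) {
        ringBuf[writeIndex] = samples[i];
        writeIndex = (writeIndex + 1) % RING_LEN;
    }
    // When writeIndex wraps to 0, a full window is ready; signal the
    // FFT code (e.g. via a semaphore or queue) instead of running it inline.
}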

How to convert an FFMPEG AVFrame in YUVJ420P to an AVFoundation CVPixelBufferRef?

I have an FFMPEG AVFrame in YUVJ420P and I want to convert it to a CVPixelBufferRef with CVPixelBufferCreateWithBytes. The reason I want to do this is to use AVFoundation to show/encode the frames.
I selected kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange and tried converting it, since the AVFrame has the data in three planes (Y480 Cb240 Cr240), and according to what I've researched this matches the selected kCVPixelFormatType. Being biplanar, I need to convert it into a buffer that contains Y480 and CbCr480 interleaved.
I tried to create a buffer with 2 planes:
frame->data[0] on the first plane,
frame->data[1] and frame->data[2] interleaved on the second plane.
However, I'm getting return error -6661 (kCVReturnInvalidArgument) from CVPixelBufferCreateWithBytes:
"Invalid function parameter. For example, out of range or the wrong type."
I don't have expertise on image processing at all, so any pointers to documentation that can get me started in the right approach to this problem are appreciated. My C skills aren't top of the line either so maybe I'm making a basic mistake here.
uint8_t **buffer = malloc(2*sizeof(int *));
buffer[0] = frame->data[0];
buffer[1] = malloc(frame->linesize[0]*sizeof(int));
for(int i = 0; i < frame->linesize[0]; i++){
    if(i%2){
        buffer[1][i] = frame->data[1][i/2];
    }else{
        buffer[1][i] = frame->data[2][i/2];
    }
}
int ret = CVPixelBufferCreateWithBytes(NULL, frame->width, frame->height,
    kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, buffer,
    frame->linesize[0], NULL, 0, NULL, cvPixelBufferSample);
The frame is the AVFrame holding the raw data from FFmpeg decoding.
My C skills aren't top of the line either so maybe I'm making a basic mistake here.
You're making several:
You should be using CVPixelBufferCreateWithPlanarBytes(). I do not know if CVPixelBufferCreateWithBytes() can be used to create a planar video frame; if so, it will require a pointer to a "plane descriptor block" (I can't seem to find the struct in the docs).
frame->linesize[0] is the bytes per row, not the size of the whole image. The docs are unclear, but the usage is fairly unambiguous.
frame->linesize[0] refers to the Y plane; you care about the UV planes.
Where is sizeof(int) from?
You're passing in cvPixelBufferSample; you might mean &cvPixelBufferSample.
You're not passing in a release callback. The documentation does not say that you can pass NULL.
Try something like this:
size_t srcPlaneSize = frame->linesize[1]*frame->height;
size_t dstPlaneSize = srcPlaneSize *2;
uint8_t *dstPlane = malloc(dstPlaneSize);
void *planeBaseAddress[2] = { frame->data[0], dstPlane };
// This loop is very naive and assumes that the line sizes are the same.
// It also copies padding bytes.
assert(frame->linesize[1] == frame->linesize[2]);
for(size_t i = 0; i<srcPlaneSize; i++){
// These might be the wrong way round.
dstPlane[2*i ]=frame->data[2][i];
dstPlane[2*i+1]=frame->data[1][i];
}
// This assumes the width and height are even (it's 420 after all).
assert(!frame->width%2 && !frame->height%2);
size_t planeWidth[2] = {frame->width, frame->width/2};
size_t planeHeight[2] = {frame->height, frame->height/2};
// I'm not sure where you'd get this.
size_t planeBytesPerRow[2] = {frame->linesize[0], frame->linesize[1]*2};
int ret = CVPixelBufferCreateWithPlanarBytes(
NULL,
frame->width,
frame->height,
kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
NULL,
0,
2,
planeBaseAddress,
planeWidth,
planeHeight,
planeBytesPerRow,
YOUR_RELEASE_CALLBACK,
YOUR_RELEASE_CALLBACK_CONTEXT,
NULL,
&cvPixelBufferSample);
Memory management is left as an exercise to the reader, but for test code you might get away with passing in NULL instead of a release callback.
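If you do want a real callback, a minimal sketch might be (the function name is illustrative; the signature is CVPixelBufferReleasePlanarBytesCallback, and dstPlane from above would be passed as the refcon):
static void releasePlanarBytes(void *releaseRefCon,
                               const void *dataPtr,
                               size_t dataSize,
                               size_t numberOfPlanes,
                               const void *planeAddresses[])
{
    free(releaseRefCon); // frees dstPlane; the Y plane still belongs to the AVFrame
}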

SDL_Surface to IDirect3DTexture

Can anybody help with converting an SDL_Surface object (a texture loaded from a file) into an IDirect3DTexture9 object?
I honestly don't know why you would ever want to do this. It sounds like a truly horrible idea for a variety of reasons, so please tell us why you want to do this so we can convince you not to ;).
In the meanwhile, a quick overview of how you'd go about it:
IDirect3DTexture9* pTex = NULL;
HRESULT hr = S_OK;
hr = m_d3dDevice->CreateTexture(
    surface->w,
    surface->h,
    1,
    usage,
    format,
    D3DPOOL_MANAGED,
    &pTex,
    NULL);
This creates the actual texture with the size and format of the SDL_Surface. You'll have to fill in the usage on your own, depending on how you want to use it (see D3DUSAGE). You'll also have to figure out the format on your own - you can't directly map a SDL_PixelFormat to a D3DFORMAT. This won't be easy, unless you know exactly what pixel format your SDL_Surface is.
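For illustration, one common case might be handled like this (a sketch only; a robust mapping would check all of the surface's channel masks and bit depths):
// Hypothetical mapping for a 32-bit (X)RGB surface; extend for other layouts.
D3DFORMAT format = D3DFMT_UNKNOWN;
if (surface->format->BytesPerPixel == 4 &&
    surface->format->Rmask == 0x00FF0000 &&
    surface->format->Gmask == 0x0000FF00 &&
    surface->format->Bmask == 0x000000FF)
{
    // Amask may be 0 (XRGB) or 0xFF000000 (ARGB)
    format = surface->format->Amask ? D3DFMT_A8R8G8B8 : D3DFMT_X8R8G8B8;
}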
Now, you need to write the data into the texture. You can't use straight memcpy here, since the SDL_Surface and the actual texture may have different strides. Here's some untested code that may do this for you:
HRESULT hr;
D3DLOCKED_RECT lockedRect;

// lock the texture, so that we can write into it
// Note: if you used D3DUSAGE_DYNAMIC above, you should
// use D3DLOCK_DISCARD as the flags parameter instead of 0.
hr = pTex->LockRect(0, &lockedRect, NULL, 0);
if(SUCCEEDED(hr))
{
    // use char pointers here for byte indexing
    char* src = (char*) surface->pixels;
    char* dst = (char*) lockedRect.pBits;
    size_t numRows = surface->h;
    size_t rowSize = surface->w * surface->format->BytesPerPixel;
    // for each row...
    while(numRows--)
    {
        // copy the row
        memcpy(dst, src, rowSize);
        // use the given pitch parameters to advance to the next
        // row (since these may not equal rowSize)
        src += surface->pitch;
        dst += lockedRect.Pitch;
    }
    // don't forget this, or D3D won't like you ;)
    hr = pTex->UnlockRect(0);
}
