Unable to read an AVI file in OpenCV

I am trying to open an AVI file using OpenCV. Here is the line of code I use to read the video file:
CvCapture* capture = cvCaptureFromAVI(video_file_name);
But it returns a CvCapture pointer pointing to address 0x00000000 (i.e. a null, invalid pointer).
Could this be a codec-related issue? Missing ffmpeg?

The problem must be with the ffmpeg codec. A mismatch between the binaries causes a variable called icvCreateFileCapture_FFMPEG_p to be null, and a logical check on this variable terminates the function before it ever tries to read the AVI file.
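As a minimal sketch of checking for this failure at run time, using the OpenCV 2.x C API (the file name and error message here are illustrative, not from the original post):
#include <opencv2/highgui/highgui.hpp>
#include <cstdio>

int main()
{
    const char* video_file_name = "test.avi"; // hypothetical path
    CvCapture* capture = cvCaptureFromAVI(video_file_name);
    if (!capture)
    {
        // A NULL capture here usually means the FFmpeg backend was never
        // loaded (e.g. the opencv_ffmpeg DLL is missing or was built for a
        // different OpenCV version), not that the file itself is unreadable.
        std::fprintf(stderr, "Could not open %s; check that the OpenCV FFmpeg binaries match your OpenCV build.\n", video_file_name);
        return 1;
    }
    IplImage* frame = cvQueryFrame(capture); // decode the first frame
    if (frame)
        std::printf("Opened video, first frame is %dx%d\n", frame->width, frame->height);
    cvReleaseCapture(&capture);
    return 0;
}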

Related

Failed to Create Sound

Very simple issue, but I have no idea if anyone can help me with it. Perhaps it is a bug? I simply want to load this sound but get the warning: WARNING: C:\Users\Usuario\Documents\Corona Projects\BalloonTap\game.lua:259: audio.loadSound() failed to create sound 'audio/pop.wav'.
The directory of the audio file is C:\Users\Usuario\Documents\Corona Projects\BalloonTap\audio
The code is:
balloonPop = audio.loadSound( "audio/pop.wav")
Any possible fixes? I have tried different audio files and renaming the file. I have also tried putting it in the main folder and just calling balloonPop = audio.loadSound( "pop.wav" ), but that fails as well.
When sounds are not loading in Corona, it is a good idea to check that the file can be found:
local filename = "audio/pop.wav"
if system.pathForFile( filename, system.ResourceDirectory ) == nil then
print("WARNING: cannot find audio file "..filename )
end
The fact that the sound could not be created suggests, however, that this is not the problem. From the Audio Usage/Functions Guide:
Cross-platform .wav files must be 16-bit uncompressed.
Is your sound file in the correct format?
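If you are unsure, here is one quick way to check, as a sketch (it assumes the canonical 44-byte WAV header layout, which simple .wav files normally use):
#include <cstdio>
#include <cstdint>

int main(int argc, char** argv)
{
    if (argc < 2) return 1;
    FILE* f = std::fopen(argv[1], "rb");
    if (!f) return 1;
    unsigned char h[36];
    if (std::fread(h, 1, sizeof(h), f) != sizeof(h)) { std::fclose(f); return 1; }
    // In the canonical header, bytes 20-21 hold the format tag and
    // bytes 34-35 the bit depth, both little-endian.
    uint16_t formatTag = h[20] | (h[21] << 8);      // 1 means uncompressed PCM
    uint16_t bitsPerSample = h[34] | (h[35] << 8);  // Corona wants 16 here
    std::printf("format tag = %u (1 = PCM), bits per sample = %u\n", formatTag, bitsPerSample);
    std::fclose(f);
    return 0;
}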

How to save just raw PCM to file with iOS SDK (Core Audio)?

I'm converting an MP3 file into raw PCM, and I need to save it as just raw PCM. (Note: I am using Java/RoboVM to port to iOS.)
I'm using the coreaudio package, and the relevant part of my code looks like this:
// Define the output PCM format.
AudioStreamBasicDescription outputFormat = new AudioStreamBasicDescription();
outputFormat.setFormat(AudioFormat.LinearPCM);
outputFormat.setFormatFlags(AudioFormatFlags.Canonical);
outputFormat.setBitsPerChannel(16);
outputFormat.setChannelsPerFrame(1);
outputFormat.setFramesPerPacket(1);
outputFormat.setBytesPerFrame(2);
outputFormat.setBytesPerPacket(2);
outputFormat.setSampleRate(22050);
// ...
outputFile = ExtAudioFile.create(outputFileURL, AudioFileType.CAF, outputFormat, null, AudioFileFlags.EraseFile);
I then run through a loop, reading from the MP3 file and writing to the output file.
Upon importing this raw file into Audacity, I notice it always has a spike at the start, indicating that it is not actually a raw PCM file but is instead wrapped in a container with a header (whether WAV or CAF headers, etc.).
I understand I could just strip the header off afterwards to get the raw PCM data, but for the space/performance of this part of my app I'd love to keep it simple and save the raw PCM data as-is, without a wrapper; I just don't know how to go about doing that.
The issue arises here:
outputFile = ExtAudioFile.create(outputFileURL, AudioFileType.CAF, outputFormat, null, AudioFileFlags.EraseFile);
There aren't many choices for AudioFileType; I've tried WAVE and CAF. Ideally there would be a PCM or RAW option, but there's not. Is there a specific AudioFileType I should choose, or do I need to go about this another way?
The extended audio file services framework doesn't support a "raw" PCM format.
For an application to understand a PCM format it needs to know things like:
How many channels are there
Are they interleaved or not
What is the sample rate
Is the data floating point or not
What is the bit depth
etc...
In fact, on iOS and OS X the AudioStreamBasicDescription is a struct which tells you what is required to interpret a PCM stream. For this reason, a "raw PCM" format doesn't really work, it needs at least some metadata. The closest formats to raw PCM are WAV, AIFF and CAF. If these don't serve your purposes you'll have to create a custom file format. But this doesn't need to be difficult.
The extended audio file services APIs are quite configurable. After opening an audio file to read (ExtAudioFileOpenURL) you can set various properties on the ExtAudioFileRef handle.
In your case consider setting kExtAudioFileProperty_ClientDataFormat. This property controls the format of the PCM data read from the file. As ExtAudioFileRead decodes the input file, it will convert the data it sends back to the format you specify. There are some limitations to this method. IIRC, it does not support doing sample rate conversion and things like that.
As you read the properly decoded data, you can then use something like NSOutputStream to write the "raw PCM" format of your choice directly to a file with no metadata at all.
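A minimal sketch of that approach in plain C/Core Audio (the same calls exist in the RoboVM bindings used above); the helper name, buffer size, and the 22050 Hz mono 16-bit client format are illustrative:
#include <AudioToolbox/ExtendedAudioFile.h>
#include <cstdio>

static bool dumpRawPCM(CFURLRef inputURL, const char* outputPath)
{
    ExtAudioFileRef inFile = NULL;
    if (ExtAudioFileOpenURL(inputURL, &inFile) != noErr)
        return false;

    // Ask ExtAudioFileRead to hand back 16-bit signed integer mono PCM,
    // whatever the encoded format of the input file is.
    AudioStreamBasicDescription client = {};
    client.mSampleRate       = 22050;
    client.mFormatID         = kAudioFormatLinearPCM;
    client.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    client.mBitsPerChannel   = 16;
    client.mChannelsPerFrame = 1;
    client.mFramesPerPacket  = 1;
    client.mBytesPerFrame    = 2;
    client.mBytesPerPacket   = 2;
    ExtAudioFileSetProperty(inFile, kExtAudioFileProperty_ClientDataFormat,
                            sizeof(client), &client);

    FILE* out = std::fopen(outputPath, "wb");
    SInt16 samples[4096];
    for (;;) {
        AudioBufferList bufs;
        bufs.mNumberBuffers = 1;
        bufs.mBuffers[0].mNumberChannels = 1;
        bufs.mBuffers[0].mDataByteSize   = sizeof(samples);
        bufs.mBuffers[0].mData           = samples;
        UInt32 frames = sizeof(samples) / sizeof(samples[0]);
        if (ExtAudioFileRead(inFile, &frames, &bufs) != noErr || frames == 0)
            break; // error or end of file
        // Write the bare samples: no header, no wrapper, just PCM bytes.
        std::fwrite(samples, client.mBytesPerFrame, frames, out);
    }
    std::fclose(out);
    ExtAudioFileDispose(inFile);
    return true;
}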

How to read a frame from .mts video file with gstreamer 1.0 and process it by OpenCV

For the past two weeks I have been trying to find a proper way to read frames from a .mts video file and process them in OpenCV. When the .mts file is in 25p (25 fps progressive) format, OpenCV's VideoCapture works fine for seeking video frames, but when it is in 50i (25 fps interlaced) format, VideoCapture cannot decode it properly frame by frame.
(E.g. in a sample scenario, when I read frame #1, then read frame #300, and later read frame #1 again, I get a corrupted image different from my previous read of frame #1.) (I am using OpenCV 2.4.6.)
I decided to replace the video decoder part of the program.
I tried FFmpegSource2, but the problem of proper frame seeking in .mts files was not resolved (most of the time the FFMS_GetFrame function returns the same output for several consecutive frames of a 50i .mts file).
I also tried DirectShow. But the IsFormatSupported method of IMediaSeeking does not return S_OK for TIME_FORMAT_FRAME with a 50i .mts video file; it only supports TIME_FORMAT_MEDIA_TIME for this kind of file. I have not tried it myself, but a friend said that even using TIME_FORMAT_MEDIA_TIME for frame seeking results in the same problem as above, and I may not be able to jump back and forth to individual frames and read their data.
Now I am going to try gstreamer. I found a sample method for linking gstreamer and OpenCV at the following link:
Adding opencv processing to gstreamer application
When I try to compile it in gstreamer 1.0, I get the following error:
error C3861: 'gst_app_sink_pull_buffer': identifier not found
I have included gst/gst.h, gst/app/gstappsink.h, gst/app/gstappsrc.h
I looked at the following help page and the gst_app_sink_pull_buffer function is not listed there either.
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-base-libs/html/gst-plugins-base-libs-appsink.html
I am using gstreamer 1.0 (v1.2.0) from gstreamer.freedesktop.org
Maybe the gstreamer SDK from www.gstreamer.com (based on gstreamer 0.10) would work for that, but I have not tried it yet and prefer to use gstreamer from gstreamer.freedesktop.org.
I don't know where gst_app_sink_pull_buffer is defined. Does anybody know how I can compile the sample method provided for gstreamer 0.10 in
Adding opencv processing to gstreamer application under gstreamer 1.0?
Thank you in advance.
UPDATE 1: I am new to gstreamer. Now I know that I have to port the sample method of Adding opencv processing to gstreamer application from gstreamer 0.10 to gstreamer 1.0. I replaced the gst_app_sink_pull_buffer call with gst_app_sink_pull_sample and gst_sample_get_buffer, roughly as in the sketch below. I have to work more on the other parts of the code and see if I can open a desired frame from a 50i .mts video file and process it with OpenCV.
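For reference, a minimal sketch of that 0.10-to-1.0 replacement, mapping the pulled sample into an OpenCV Mat; the helper name and the assumption that the appsink caps are raw BGR with known dimensions are mine, not from the original post:
#include <gst/gst.h>
#include <gst/app/gstappsink.h>
#include <opencv2/core/core.hpp>

cv::Mat pullFrame( GstAppSink* sink, int width, int height )
{
    // gstreamer 1.0: pull a GstSample, then take its GstBuffer
    // (this pair replaces the old gst_app_sink_pull_buffer).
    GstSample* sample = gst_app_sink_pull_sample( sink );
    if ( !sample )
        return cv::Mat(); // EOS or error
    GstBuffer* buffer = gst_sample_get_buffer( sample );
    GstMapInfo map;
    gst_buffer_map( buffer, &map, GST_MAP_READ );
    // Copy the mapped bytes so the Mat stays valid after the sample is freed.
    cv::Mat frame = cv::Mat( height, width, CV_8UC3, map.data ).clone();
    gst_buffer_unmap( buffer, &map );
    gst_sample_unref( sample );
    return frame;
}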
UPDATE 2: I found a very good example at
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/section-data-spoof.html#section-spoof-appsink
And I easily replaced the part which saves the snapshot using GTK with functions that load the frame data buffer into an OpenCV Mat. This program works fine for many video file types, and I can grab frames of the video file into an OpenCV Mat. But when the input video file is a 50i .mts video file, it returns the following errors and I cannot read the frame data:
No accelerated IMDCT transform found
0:00:00.405110839 4632 0B775380 ERROR libav :0:: get_buffer() failed (-1 2 00000000)
0:00:00.405740899 4632 0B775380 ERROR libav :0:: decode_slice_header error
0:00:00.406401077 4632 0B7756A0 ERROR libav :0:: Missing reference picture
0:00:00.406705867 4632 0B7756A0 ERROR libav :0:: Missing reference picture
0:00:00.416044436 4632 0B7759C0 ERROR libav :0:: Cannot combine reference and non-reference fields in the same frame
0:00:00.416813339 4632 0B7759C0 ERROR libav :0:: decode_slice_header error
0:00:00.417725301 4632 0B775CE0 ERROR libav :0:: Missing reference picture
Step-by-step debugging shows that "No accelerated IMDCT transform found" appears after running
ret = gst_element_get_state( pipeline, NULL, NULL, 5 * GST_SECOND );
and a Google search suggests I can ignore it as a warning.
All of the other errors emerge just after running
g_signal_emit_by_name( sink, "pull-preroll", &sample, NULL );
I have no idea how to resolve this issue. I have already played this .mts file in another example using playbin, and gstreamer plays this .mts video file well when I use playbin.

Capture H264 to buffer with OpenCV

I receive an RTP H.264 stream. I process it like this:
- get the UDP packet
- remove the RTP header and parse the packet to get an image
- record/append the image into a file
- open this file with OpenCV (bool VideoCapture::open(const string& filename))
and it all works fine!
Now I want to skip the record-to-file step and send images directly from the UDP process to OpenCV. But I don't know how to initialise OpenCV with an input buffer; VideoCapture::open only accepts a const string& filename.
Could somebody help me?
Thanks
If it's single images you've got in memory, imdecode should do the trick.
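A minimal sketch, assuming each frame arrives as a complete still-image payload (the function name and vector are illustrative); note that imdecode handles still-image codecs such as JPEG or PNG, so raw H.264 NAL units would still need a video decoder first:
#include <opencv2/highgui/highgui.hpp>
#include <vector>

// frameBytes: one complete encoded image reassembled from the UDP packets.
cv::Mat decodeFromMemory( const std::vector<uchar>& frameBytes )
{
    cv::Mat frame = cv::imdecode( frameBytes, CV_LOAD_IMAGE_COLOR );
    // frame.empty() means the buffer was not a complete, decodable image.
    return frame;
}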

The codec could not be accessed. (-66672)

I am trying to convert a caf file to an m4a file using AudioUnit. I have implemented the conversion code. When I run the application, I get the following error message:
couldn't set destination client format (-66672)
I got the sample code from following link;
http://developer.apple.com/library/ios/#samplecode/iPhoneExtAudioFileConvertTest/Introduction/Intro.html
Code:
size = sizeof(clientFormat);
XThrowIfError(ExtAudioFileSetProperty(sourceFile, kExtAudioFileProperty_ClientDataFormat, size, &clientFormat), "couldn't set source client format");
//UInt32 encoderSpecifier = kAudioFormatMPEG4AAC;
//XThrowIfError(AudioFormatGetPropertyInfo(kAudioFormatProperty_Encoders, sizeof(encoderSpecifier), &encoderSpecifier, &size), "AudioFormatGetPropertyInfo: couldn't get property info");
size = sizeof(clientFormat);
XThrowIfError(ExtAudioFileSetProperty(destinationFile, kExtAudioFileProperty_ClientDataFormat, size, &clientFormat), "couldn't set destination client format");
AudioConverterRef audioConverter;
size = sizeof(audioConverter);
XThrowIfError(ExtAudioFileGetProperty(destinationFile, kExtAudioFileProperty_AudioConverter, &size, &audioConverter), "Couldn't get Audio Converter!");
I have not found a solution for it. I tried setting the properties on the output file, but I get the same issue.
Please help me to resolve it.
I encountered this one too. It's not well documented, but the reason is probably that you have to set the audio session category to one compatible with hardware encoding.
In particular, any audio session that provides mixing with other sounds on the device will stop the encoder from working.
I know that AVAudioSessionCategoryPlayAndRecord, AVAudioSessionCategorySoloAmbient and AVAudioSessionCategoryAudioProcessing work for sure (as long as you're not overriding the kAudioSessionProperty_OverrideCategoryMixWithOthers property).
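As a sketch, using the C audio session API of that era (AVAudioSession's setCategory: does the same from Objective-C); the function name is mine, and something like this would run before opening the destination file:
#include <AudioToolbox/AudioToolbox.h>

void prepareSessionForHardwareEncoding( void )
{
    AudioSessionInitialize( NULL, NULL, NULL, NULL );
    // Pick a category that does NOT mix with other audio; a mixing
    // category keeps the hardware AAC encoder unavailable (-66672).
    UInt32 category = kAudioSessionCategory_PlayAndRecord;
    AudioSessionSetProperty( kAudioSessionProperty_AudioCategory,
                             sizeof(category), &category );
    AudioSessionSetActive( true );
}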
I've actually assembled everything you need to encode any audio file to AAC into an asynchronous class: TPAACAudioConverter
