OpenCV and iPhone

I am writing an application to create a movie file from a bunch of images on an iPhone, using OpenCV. I downloaded the OpenCV static libraries for ARM (the iPhone's native instruction architecture) and the libraries were generated just fine; there were no problems linking against them.
As a first step, I am trying to create a .avi file from a single image to see if it works, but cvCreateVideoWriter always returns NULL. I did some searching and I believe it's due to the codec not being present. I am trying this on the iPhone simulator. This is what I do:
- (void)viewDidLoad {
    [super viewDidLoad];
    UIImage *anImage = [UIImage imageNamed:@"1.jpg"];
    IplImage *img_color = [self CreateIplImageFromUIImage:anImage];
    // The image gets created just fine
    CvVideoWriter *writer =
        cvCreateVideoWriter("out.avi", CV_FOURCC('P','I','M','1'),
                            25, cvSize(320, 480), 1);
    // writer is always NULL
    int result = cvWriteFrame(writer, img_color);
    NSLog(@"\n%d", result);
    // hence this is also 0 all the time
    cvReleaseVideoWriter(&writer);
}
I am not sure about the second parameter: what sort of codec does it specify, and what exactly does it do?
I am a n00b at this. Any suggestions?

On *nix flavors, OpenCV uses ffmpeg under the covers to encode video files, so you need to make sure your static libraries are built with ffmpeg support. The second parameter, CV_FOURCC('P','I','M','1'), is the FOURCC code describing the video format/codec you are requesting, in this case the MPEG1 codec. Check out fourcc.org for a complete listing (not all of which work in ffmpeg).
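As a sanity check, it may help to try a different FOURCC and verify whether the writer can be created at all. This is only a sketch: MJPG is just an example, and which codecs actually work depends entirely on how your ffmpeg/OpenCV libraries were built.
// Sanity-check sketch: try another FOURCC and verify the writer is non-NULL.
// MJPG is only an example; codec availability depends on your ffmpeg build.
CvVideoWriter *testWriter = cvCreateVideoWriter("out.avi",
                                                CV_FOURCC('M', 'J', 'P', 'G'),
                                                25, cvSize(320, 480), 1);
if (testWriter == NULL) {
    NSLog(@"cvCreateVideoWriter failed - codec not available in this build");
} else {
    cvReleaseVideoWriter(&testWriter);
}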

Related

iOS ffmpeg: how to run a command to trim a remote URL video?

I was initially using the AVFoundation libraries to trim video, but they have a limitation: they can't do it for remote URLs and only work with local URLs.
After further research I found the ffmpeg library, which can be included in an Xcode project for iOS.
I have tested the following command to trim a remote video on the command line:
ffmpeg -y -ss 00:00:01.000 -i "http://i.imgur.com/gQghRNd.mp4" -t 00:00:02.000 -async 1 cut.mp4
which will trim the .mp4 from the 1 second mark to the 3 second mark. This works perfectly on the command line on my Mac.
I have successfully been able to compile and include the ffmpeg library in an Xcode project, but I'm not sure how to proceed further.
Now I am trying to figure out how to run this command in an iOS app using the ffmpeg libraries. How can I do this?
If you can point me in a helpful direction, I would really appreciate it! If I can get it resolved using your solution, I will award a bounty (in 2 days, when it gives me the option).
I have some ideas about this. However, I have very limited experience on iOS and I'm not sure whether my approach is the best way.
As far as I know, it is generally impossible to run command-line tools on iOS, so you will probably have to write some code linked against the ffmpeg libs.
Here is all the work that needs to be done:
Open the input file and initialize the ffmpeg contexts.
Get the video stream and seek to the timestamp you want. This may be complicated; see the ffmpeg tutorial for some help, or check this to seek precisely and deal with the troublesome key frames.
Decode frames until a frame matches the end timestamp.
Meanwhile, encode the decoded frames to a new file as output (a rough sketch of the encoding part follows the code below).
The examples in the ffmpeg source are very good for learning how to do this.
Some possibly useful code:
av_register_all();
avformat_network_init();

AVFormatContext* fmt_ctx = NULL;
avformat_open_input(&fmt_ctx, "http://i.imgur.com/gQghRNd.mp4", NULL, NULL);
avformat_find_stream_info(fmt_ctx, NULL);

AVCodec* dec;
int video_stream_index = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, &dec, 0);
AVCodecContext* dec_ctx = avcodec_alloc_context3(NULL);
avcodec_parameters_to_context(dec_ctx, fmt_ctx->streams[video_stream_index]->codecpar);
// If there is audio you need, it should be decoded/encoded too.
avcodec_open2(dec_ctx, dec, NULL);
// decode initiation done

// frame_target (or timestamp_target), second_needed and fps are placeholders
// you need to compute for your clip.
av_seek_frame(fmt_ctx, video_stream_index, frame_target, AVSEEK_FLAG_FRAME);
// or av_seek_frame(fmt_ctx, video_stream_index, timestamp_target, AVSEEK_FLAG_ANY)
// and most of the time you will need AVSEEK_FLAG_BACKWARD, skipping some of the following frames too.

AVPacket packet;
AVFrame* frame = av_frame_alloc();
int ret, got_frame;
int frame_decoded = 0;
while (av_read_frame(fmt_ctx, &packet) >= 0 && frame_decoded < second_needed * fps) {
    if (packet.stream_index == video_stream_index) {
        got_frame = 0;
        ret = avcodec_decode_video2(dec_ctx, frame, &got_frame, &packet);
        // This is the old ffmpeg decode/encode API; it will be deprecated eventually, but it still works.
        if (got_frame) {
            frame_decoded++;
            // encode frame here
        }
    }
    av_packet_unref(&packet);
}
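For the "encode frame here" part, a minimal sketch using the same old-style API could look like the following. Note that out_fmt_ctx, out_stream and enc_ctx are assumed to have been set up separately (avformat_alloc_output_context2, avformat_new_stream, avcodec_open2, avformat_write_header); they are not defined in the code above.
// Encoding sketch (assumes out_fmt_ctx / out_stream / enc_ctx are already set up
// and avformat_write_header() has been called).
AVPacket out_pkt;
av_init_packet(&out_pkt);
out_pkt.data = NULL;
out_pkt.size = 0;

int got_packet = 0;
avcodec_encode_video2(enc_ctx, &out_pkt, frame, &got_packet);
if (got_packet) {
    // Rescale timestamps from the encoder time base to the output stream time base.
    av_packet_rescale_ts(&out_pkt, enc_ctx->time_base, out_stream->time_base);
    out_pkt.stream_index = out_stream->index;
    av_interleaved_write_frame(out_fmt_ctx, &out_pkt);
    av_packet_unref(&out_pkt);
}
// After the read loop, flush the encoder the same way with a NULL frame,
// then call av_write_trailer(out_fmt_ctx).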

CVPixelBufferRef as a GPU Texture

I have one (or possibly two) CVPixelBufferRef objects I am processing on the CPU, and then placing the results onto a final CVPixelBufferRef. I would like to do this processing on the GPU using GLSL instead, because the CPU can barely keep up (these are frames of live video). I know this is possible "directly" (i.e. writing my own OpenGL code), but from the (absolutely impenetrable) sample code I've looked at, it's an insane amount of work.
Two options seem to be:
1) GPUImage: This is an awesome library, but I'm a little unclear on whether I can do what I want easily. The first thing I tried was requesting OpenGL ES compatible pixel buffers using this code:
@{ (NSString *)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA],
   (NSString *)kCVPixelBufferOpenGLESCompatibilityKey : [NSNumber numberWithBool:YES] };
Then transferring data from the CVPixelBufferRef to GPUImageRawDataInput as follows:
// setup:
_foreground = [[GPUImageRawDataInput alloc] initWithBytes:nil
                                                     size:CGSizeMake(0, 0)
                                              pixelFormat:GPUPixelFormatBGRA
                                                     type:GPUPixelTypeUByte];
// call for each frame:
[_foreground updateDataFromBytes:CVPixelBufferGetBaseAddress(foregroundPixelBuffer)
                            size:CGSizeMake(CVPixelBufferGetWidth(foregroundPixelBuffer),
                                            CVPixelBufferGetHeight(foregroundPixelBuffer))];
However, my CPU usage goes from 7% to 27% on an iPhone 5S just with that line (no processing or anything). This suggests there's some copying going on on the CPU, or something else is wrong. Am I missing something?
2) OpenFrameworks: OF is commonly used for this type of thing, and OF projects can easily be set up to use GLSL. However, two questions remain about this solution: 1. Can I use OpenFrameworks as a library, or do I have to rejigger my whole app just to use its OpenGL features? I don't see any tutorials or docs that show how I might do this without actually starting from scratch and creating an OF app. 2. Is it possible to use a CVPixelBufferRef as a texture?
I am targeting iOS 7+.
I was able to get this to work using the GPUImageMovie class. If you look inside this class, you'll see that there's a private method called:
- (void)processMovieFrame:(CVPixelBufferRef)movieFrame withSampleTime:(CMTime)currentSampleTime
This method takes a CVPixelBufferRef as input.
To access this method, declare a class extension that exposes it inside your class
@interface GPUImageMovie ()
- (void)processMovieFrame:(CVPixelBufferRef)movieFrame withSampleTime:(CMTime)currentSampleTime;
@end
Then initialize the class, set up the filter, and pass it your video frame:
GPUImageMovie *gpuMovie = [[GPUImageMovie alloc] initWithAsset:nil]; // <- call initWithAsset even though there's no asset
// to initialize internal data structures
// connect filters...
// Call the method we exposed
[gpuMovie processMovieFrame:myCVPixelBufferRef withSampleTime:kCMTimeZero];
One thing: you need to request your pixel buffers with kCVPixelFormatType_420YpCbCr8BiPlanarFullRange in order to match what the library expects.
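For example, if your CVPixelBufferRefs come from an AVCaptureVideoDataOutput (an assumption here; adapt this to wherever your buffers actually originate), the request might look like this sketch:
// Hypothetical capture configuration: request bi-planar full-range 4:2:0
// pixel buffers so they match what GPUImageMovie expects.
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.videoSettings = @{
    (NSString *)kCVPixelBufferPixelFormatTypeKey :
        @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
};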

How to use AVAssetWriter instead of AVAssetExportSession to re-encode existing video

I'm trying to re-encode videos on an iPad which were recorded on that device but with the "wrong" orientation. This is because when the file is converted to an MP4 file and uploaded to a web server for use with the "video" HTML5 tag, only Safari seems to render the video with the correct orientation.
Basically, I've managed to implement what I wanted by using an AVMutableVideoCompositionLayerInstruction and then using AVAssetExportSession to create the resultant video with audio. However, the problem is that the file size jumps up considerably after doing this, e.g. correcting an original file of 4.1MB results in a final file size of 18.5MB! All I've done is rotate the video through 180 degrees! Incidentally, the video instance that I'm trying to process was originally created by the UIImagePicker during "compression" using videoQuality = UIImagePickerControllerQualityType640x480, which actually results in videos of 568 x 320 on an iPad mini.
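For reference, the rotation setup described above is roughly along these lines (a sketch with placeholder names such as asset and videoTrack, not the exact code):
// Sketch of rotating a track 180 degrees with a layer instruction.
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];

AVMutableVideoCompositionLayerInstruction *layerInstruction =
    [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
CGAffineTransform rotate = CGAffineTransformMakeRotation(M_PI);
CGAffineTransform shift  = CGAffineTransformMakeTranslation(videoTrack.naturalSize.width,
                                                            videoTrack.naturalSize.height);
// Rotate about the origin, then shift back into the visible area.
[layerInstruction setTransform:CGAffineTransformConcat(rotate, shift) atTime:kCMTimeZero];

AVMutableVideoCompositionInstruction *instruction =
    [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, [asset duration]);
instruction.layerInstructions = [NSArray arrayWithObject:layerInstruction];

AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.instructions = [NSArray arrayWithObject:instruction];
videoComposition.renderSize = videoTrack.naturalSize;
videoComposition.frameDuration = CMTimeMake(1, 30); // 30 fps, just an example
// videoComposition is then assigned to exportSession.videoComposition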
I experimented with the various presetName settings on AVAssetExportSession, but I couldn't get the desired result. The closest I got file-size-wise was 4.1MB (i.e. exactly the same as the source!) using AVAssetExportPresetMediumQuality, BUT this also reduced the dimensions of the resultant video to 480 x 272 instead of the 568 x 320 that I had explicitly set.
So, this led me to look into other options, and hence using AVAssetWriter instead. The problem is, I can't get any code that I have found to work! I tried the code found on this SO post (Video Encoding using AVAssetWriter - CRASHES), but can't get it to work. For a start, I get a compilation error for this line:
NSDictionary *videoOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
The resulting compilation error is:
Undefined symbols for architecture armv7: "_kCVPixelBufferPixelFormatTypeKey"
That aside, I tried passing in nil for the AVAssetReaderTrackOutput's outputSettings, which should be OK according to the header info:
A value of nil for outputSettings configures the output to vend samples in their original format as stored by the specified track.
However, I then get a crash happening at this line:
BOOL result = [videoWriterInput appendSampleBuffer:sampleBuffer];
In short, I've not been able to get any code to work with AVAssetWriter, so I REALLY need some help here. Are there any other ways to achieve my desired results? Incidentally, I'm using Xcode 4.6 and targeting everything from iOS 5 upwards, using ARC.
I have solved similar problems to the ones in your question. This might help someone who runs into the same issues:
Assuming writerInput is your instance of AVAssetWriterInput and assetTrack is your instance of AVAssetTrack, your transform problem is solved simply by:
writerInput.transform = assetTrack.preferredTransform;
You have to release sampleBuffer after appending your sample buffer, so you would have something like:
if ((sampleBuffer = [asset_reader_output copyNextSampleBuffer])) {
    BOOL result = [writerInput appendSampleBuffer:sampleBuffer];
    CFRelease(sampleBuffer); // release sampleBuffer!
}
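Putting those pieces together, the read/append loop might look something like this sketch (assuming the reader and writer have already been started; asset_reader_output and writerInput are the names from your code):
// Pull-loop sketch: wrap each iteration in @autoreleasepool so autoreleased
// temporaries created while appending are drained promptly.
while (writerInput.readyForMoreMediaData) {
    @autoreleasepool {
        CMSampleBufferRef sampleBuffer = [asset_reader_output copyNextSampleBuffer];
        if (sampleBuffer == NULL) {
            [writerInput markAsFinished]; // reader is exhausted
            break;
        }
        BOOL result = [writerInput appendSampleBuffer:sampleBuffer];
        CFRelease(sampleBuffer);
        if (!result) {
            break; // check the asset writer's error property for details
        }
    }
}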
The compilation error was caused by me not including CoreVideo.framework. As soon as I had included and imported it, the code compiled. The code also runs and generates a resultant video, but I uncovered 2 new problems:
I can't get the transform to work using the transform property on AVAssetWriterInput. This means that I'm stuck with using an AVMutableVideoCompositionInstruction and AVAssetExportSession for the transformation.
If I use AVAssetWriter just to handle compression (seeing as I don't have many options with AVAssetExportSession), I still have a bad memory leak. I've tried everything I can think of, starting with the solution in this link ( Help Fix Memory Leak release ) and also with @autoreleasepool blocks at key points. But it seems that the following line will cause a leak, no matter what I try:
CMSampleBufferRef sampleBuffer = [asset_reader_output copyNextSampleBuffer];
I could really do with some help.

Alternative to the cvWaitKey() function in iOS

OK, what I am trying to do is retrieve a frame from an existing video file, do some work on the frame, and then save it to a new file. What actually happens is that it writes some frames and then crashes, as the code is quite fast.
If I don't put in cvWaitKey(), I get the same error I get when writing video frames with the AVFoundation library without using AVAssetWriterInput.readyForMoreMediaData.
The OpenCV video writer is implemented using AVFoundation classes, but we lose access to AVAssetWriterInput.readyForMoreMediaData. Or am I missing something?
Here is code similar to what I'm trying to do:
while (grabResult && frameResult) {
    grabResult = cvGrabFrame(capture);            // capture a frame
    if (grabResult) {
        img = cvRetrieveFrame(capture, 0);        // retrieve the captured frame
        cvFlip(img, NULL, 0);                     // edit img
        frameResult = cvWriteFrame(writer, img);  // add the frame to the file
        cvWaitKey(-1); // or anything that helps to finish adding the previous frame
    }
}
I am trying to convert a video file using OpenCV (without displaying anything) in my iPhone/iPad app. Everything works except the cvWaitKey() function, for which I get this error:
OpenCV Error: Unspecified error (The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script) in cvWaitKey,
Without this function frames are dropped, as there is no way to know whether the video writer is ready. Is there an alternative that would solve my problem?
I am using OpenCV 2.4.2, and I get the same error with the latest precompiled version of OpenCV.
Repaint the UIImageView:
[[NSRunLoop currentRunLoop]runUntilDate:[NSDate dateWithTimeIntervalSinceNow:0.0f]];
followed by your imshow or cvShowImage
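If the goal is not to display anything but simply to give AVFoundation time to finish writing the previous frame, one thing to try is spinning the run loop briefly in place of cvWaitKey() inside the loop above. This is only a workaround sketch, not a guaranteed fix:
// Workaround sketch (untested): replace cvWaitKey() in the writing loop
// with a short run-loop spin so pending AVFoundation work can complete.
frameResult = cvWriteFrame(writer, img);
[[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:0.01]];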
I don't know anything about iOS development, but did you try calling the C++ interface method waitKey()?

iOS Barcode Efficient Scanner

I am working on an iOS application and would like to embed an already developed and tested barcode scanner into it. I tried zxing, but it never extracts the numbers. My goal is to scan this image and get the 24 characters out of it.
If there isn't an already developed solution, I would like to build one myself. How should I start in order to create it from scratch, initially for 1D barcodes?
With zxing, this is the piece of code I am using now:
- (IBAction)scanPressed:(id)sender {
    ZXingWidgetController *widController =
        [[ZXingWidgetController alloc] initWithDelegate:self showCancel:YES OneDMode:YES];
    zxing::oned::Code128Reader *code128Reader = new zxing::oned::Code128Reader();
    MultiFormatOneDReader *mfReader = [[MultiFormatOneDReader alloc] initWithReader:code128Reader];
    NSSet *readers = [[NSSet alloc] initWithObjects:mfReader, nil];
    [mfReader release];
    widController.readers = readers;
    [readers release];
    NSBundle *mainBundle = [NSBundle mainBundle];
    widController.soundToPlay =
        [NSURL fileURLWithPath:[mainBundle pathForResource:@"beep-beep" ofType:@"aiff"] isDirectory:NO];
    [self presentModalViewController:widController animated:YES];
    [widController release];
}
I tried the ZXing SDK first but it didn't work. I then tried the ZBar SDK, which worked amazingly well.
In case someone needs the same thing in the future, here is the link that helped me make it work:
http://zbar.sourceforge.net/iphone/sdkdoc/tutorial.html
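In case it saves someone a click, the basic usage from that tutorial boils down to roughly the following sketch (written from memory, so check the tutorial for the exact details; pre-ARC style to match the rest of this question):
// Minimal ZBar reader setup, roughly following the linked tutorial.
- (IBAction)zbarScanPressed:(id)sender {
    ZBarReaderViewController *reader = [ZBarReaderViewController new];
    reader.readerDelegate = self;
    [self presentModalViewController:reader animated:YES];
    [reader release];
}

// ZBar delivers results through the standard UIImagePickerController delegate method.
- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info {
    id<NSFastEnumeration> results = [info objectForKey:ZBarReaderControllerResults];
    for (ZBarSymbol *symbol in results) {
        NSLog(@"Decoded: %@", symbol.data); // the decoded barcode string
        break; // just take the first symbol
    }
    [picker dismissModalViewControllerAnimated:YES];
}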
If this is a "Code 128" barcode, pay attention of the variant of the code.
For example if using zxing you successfully scanned the code but the decoded values do not match the numbers under the barcode, that's probably because zxing decoded the barcode bytes successfully but didn't render the result using the expected alphabet.
Code 128 exists in three variants:
Code 128 A, which covers digits, upper-case letters, punctuation and control characters
Code 128 B, which covers the wider printable ASCII set, including lower-case letters
Code 128 C, which encodes only digits, packed two per symbol (00-99)
Maybe zxing returns the 128-A or 128-B interpretation of the barcode and not the 128-C variant? In that case the scanning itself works correctly, but you may need to force the barcode format so the result is interpreted correctly.
Maybe I'm wrong about this, but the zxing code base for iPhone only allows for QR codes. From the website http://code.google.com/p/zxing/:
There are also additional modules which are contributed and/or
intermittently maintained:
zxing.appspot.com: The source behind our web-based barcode generator
csharp: Partial C# port
cpp: Partial C++ port
**iphone: iPhone client + port to Objective C / C++ (QR code only)**
jruby: Ruby wrapper
actionscript: partial port to Actionscript
