iOS ffmpeg: how to run a command to trim a remote URL video?

I was initially using the AVFoundation libraries to trim video, but they have a limitation: they only work with local URLs, not remote ones.
So after further research I found the ffmpeg library, which can be included in an Xcode project for iOS.
I have tested the following command to trim a remote video on the command line:
ffmpeg -y -ss 00:00:01.000 -i "http://i.imgur.com/gQghRNd.mp4" -t 00:00:02.000 -async 1 cut.mp4
which trims the .mp4 from the 1-second mark to the 3-second mark (-ss sets the start point, -t the duration). This works perfectly via the command line on my Mac.
I have successfully compiled and included the ffmpeg library into an Xcode project, but I am not sure how to proceed further.
Now I am trying to figure out how to run this command in an iOS app using the ffmpeg libraries. How can I do this?
If you can point me in a helpful direction, I would really appreciate it! If I can get it resolved using your solution, I will award a bounty (in 2 days, when it gives me the option).

I have some idea about this. However, I have very limited experience on iOS and am not sure whether my approach is the best way:
As far as I know, it is generally impossible to run command-line tools on iOS. You will probably have to write some code linked against the ffmpeg libs.
Here are all the jobs that need to be done:
Open the input file and initialize some ffmpeg context.
Get the video stream and seek to the timestamp you want. This may be complicated; see the ffmpeg tutorial for some help, or check this to seek precisely and deal with the troublesome key frames (there is also a hedged seek sketch after the code below).
Decode frames until a frame matches the end timestamp.
In parallel with the above, encode the frames to a new file as output.
The examples in the ffmpeg source are very good for learning how to do this.
Some possibly useful code:
av_register_all();
avformat_network_init();
AVFormatContext* fmt_ctx = NULL;
avformat_open_input(&fmt_ctx, "http://i.imgur.com/gQghRNd.mp4", NULL, NULL);
avformat_find_stream_info(fmt_ctx, NULL);
AVCodec* dec;
int video_stream_index = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, &dec, 0);
AVCodecContext* dec_ctx = avcodec_alloc_context3(NULL);
avcodec_parameters_to_context(dec_ctx, fmt_ctx->streams[video_stream_index]->codecpar);
// If there is audio you need, it should be decoded/encoded too.
avcodec_open2(dec_ctx, dec, NULL);
// decode initiation done
// frame_target / timestamp_target below are placeholders for your start position
av_seek_frame(fmt_ctx, video_stream_index, frame_target, AVSEEK_FLAG_FRAME);
// or av_seek_frame(fmt_ctx, video_stream_index, timestamp_target, AVSEEK_FLAG_ANY)
// and most of the time you will need AVSEEK_FLAG_BACKWARD, plus skipping some following frames.
AVPacket packet;
AVFrame* frame = av_frame_alloc();
int got_frame = 0, frame_decoded = 0, ret;
int fps = 25, second_needed = 2; // placeholders: take fps from the stream, duration from your cut
while (av_read_frame(fmt_ctx, &packet) >= 0 && frame_decoded < second_needed * fps) {
    if (packet.stream_index == video_stream_index) {
        got_frame = 0;
        ret = avcodec_decode_video2(dec_ctx, frame, &got_frame, &packet);
        // This is the old ffmpeg decode/encode API; it will be deprecated later, but it still works now.
        if (got_frame) {
            frame_decoded++;
            // encode frame here (see the sketch below)
        }
    }
    av_packet_unref(&packet);
}
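The seek above is coarse. A common refinement, sketched here on the assumption that fmt_ctx, dec_ctx and video_stream_index are set up as in the snippet, is to seek backward to the nearest key frame and then decode-and-discard until the target timestamp:

// Hedged sketch: seek to the key frame at or before the 1-second mark,
// then decode forward, dropping frames that are still before the target.
int64_t target_ts = av_rescale_q(1 * AV_TIME_BASE, AV_TIME_BASE_Q,
                                 fmt_ctx->streams[video_stream_index]->time_base);
av_seek_frame(fmt_ctx, video_stream_index, target_ts, AVSEEK_FLAG_BACKWARD);
avcodec_flush_buffers(dec_ctx); // drop stale decoder state after seeking
// ...then, inside the decode loop:
// if (av_frame_get_best_effort_timestamp(frame) < target_ts) continue;

And for the "encode frame here" placeholder, a minimal sketch of the encode/mux side using the same old-style API. The names enc_ctx, out_fmt_ctx and out_stream are assumptions (an opened encoder context, an output context created with avformat_alloc_output_context2, and its video stream, with avformat_write_header already called), not anything from the answer above:

AVPacket enc_pkt;
av_init_packet(&enc_pkt);
enc_pkt.data = NULL; // let the encoder allocate the buffer
enc_pkt.size = 0;
int got_packet = 0;
if (avcodec_encode_video2(enc_ctx, &enc_pkt, frame, &got_packet) >= 0 && got_packet) {
    // rescale timestamps from the codec time base to the stream time base
    av_packet_rescale_ts(&enc_pkt, enc_ctx->time_base, out_stream->time_base);
    enc_pkt.stream_index = out_stream->index;
    av_interleaved_write_frame(out_fmt_ctx, &enc_pkt);
    av_packet_unref(&enc_pkt);
}
// After the loop, flush the encoder by calling it with frame == NULL until
// got_packet stays 0, then call av_write_trailer(out_fmt_ctx).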

Related

Force GStreamer for iOS to use an ffmpeg library outside of the framework

GStreamer 1.0 for iOS is delivered as a static framework; the source to build the framework is around 1.2 GB. The framework is huge and tries to provide for any decoding scenario you may have. The trouble is that it tries to do too much, and IMHO not enough thought was put into the iOS port.
Here's the problem: we have an application that uses the GStreamer avdec_h264 plugin for displaying an RTP-over-UDP stream, and this works rather well. Recently we were required to add some special recording functions, so we introduced an API that had its own version of ffmpeg. GStreamer has libav compiled into the framework. When we place our API into the application with gst_IOS_RESTICTED_PLUGINS disabled, the code runs fine; when we introduce the GStreamer.framework into the application, code similar to that shown below fails with a "protocol not found" error.
The problem is that the internal version of libav seems to disable all the protocols that ffmpeg supplies, because GStreamer uses its own custom AVIO callback based on the ffmpeg pipe protocol.
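For reference, a "custom AVIO callback" means replacing ffmpeg's built-in protocol layer with user-supplied I/O. A minimal, hedged sketch of the mechanism (my_read_packet and opaque_handle are hypothetical illustrations, not GStreamer's actual code):

// Read callback: pull up to buf_size bytes from your own transport.
static int my_read_packet(void *opaque, uint8_t *buf, int buf_size) {
    // hypothetical: copy data into buf here
    return AVERROR_EOF; // placeholder: return the number of bytes read
}

void *opaque_handle = NULL; // your transport state (hypothetical)
unsigned char *buffer = av_malloc(4096);
AVIOContext *avio = avio_alloc_context(buffer, 4096, 0 /* read-only */,
                                       opaque_handle, my_read_packet, NULL, NULL);
AVFormatContext *ctx = avformat_alloc_context();
ctx->pb = avio; // ffmpeg now reads through the callback, not a protocol
avformat_open_input(&ctx, "", NULL, NULL);

When libav is built with the protocols disabled, all input has to arrive through a context like this, which is consistent with a plain rtsp:// URL failing with "protocol not found".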
According to GStreamer support (which has been somewhat helpful):
1) Add a new recipe with the libav version you want to use and disable the build of the internal libav in gst-libav-1.0 with:
configure_options = '--with-system-libav'
You might need to comment out this part to prevent libav being packaged in the framework, or make sure that your libav recipe creates these files in the correct place to include them in the framework:
for f in ['libavcodec', 'libavformat', 'libavutil', 'libswscale']:
    for ext in ['.a', '.la']:
        path = os.path.join('lib', f + ext)
        self.files_plugins_codecs_restricted_devel.append(path)
2) Update the libav submodule in gst-libav to the version you need.
https://bugs.freedesktop.org/show_bug.cgi?id=77399
The first method didn't work; the recipe kept getting overwritten, even after applying a patch for a bug fix that was made as a result of this bug report.
And I have no idea how to do the second method, which is what I'd like some help with.
Has anyone with GStreamer 1.0 for iOS:
1) Built the gst-libav plugin against a set of ffmpeg static libs (.a) external to the framework?
2) Built the internal libav to allow for the RTP, UDP and TCP protocols, or written a custom AVIO callback using the FFPipe protocol?
3) Just managed to somehow get the code below working with GStreamer?
I don't ask many questions; I've implemented all kinds of encoders/decoders using ffmpeg, live555 and a few hardware decoders. But this GStreamer issue is causing me more sleepless nights than I've had in a long time.
AVFormatContext *avctx;
avctx = avformat_alloc_context();
av_register_all();
avformat_network_init();
avcodec_register_all();
avdevice_register_all();
// Set the RTSP options
AVDictionary *opts = NULL;
av_dict_set(&opts, "rtsp_transport", "udp", 0);
int err = 0;
err = avformat_open_input(&avctx, "rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov", NULL, &opts);
av_dict_free(&opts);
if (err) {
    NSLog(@"Error: Could not open stream: %d", err);
    char errbuf[400];
    av_strerror(err, errbuf, 400);
    NSLog(@"%s failed with error %s", "avformat_open_input", errbuf);
}
else {
    NSLog(@"Opened stream");
}
err = avformat_find_stream_info(avctx, NULL);
if (err < 0) {
    char errbuf[400];
    av_strerror(err, errbuf, 400);
    NSLog(@"%s failed with error %s", "avformat_find_stream_info", errbuf);
    return;
}
else {
    NSLog(@"found stream info");
}

How can I make an audible Beep in Dart?

I have searched on [DartLang and Beep] and have found only various HTML5 solutions that require a sound file. I would like to produce as basic and as universal a "Bell" sound as possible without using a sound file. I'm using Ubuntu, and there is a system beep command that reports the following when I call it with -h:
Usage:
beep [-f freq] [-l length] [-r reps] [-d delay] [-D delay] [-s] [-c] [--verbose | --debug] [-e device]
However, again, I just want to do this in as simple and universal a way as possible.
There is also this clue:
7 00/07 07 07 BEL (Ctrl-G) BELL (Beep)
...but nothing I could think of doing with the print() function would cause a beep.
Thanks in advance!
I don't know why you are doing this, but it really depends on whether you want this to work in the browser or in the console.
If you want to do this on the console, try this:
main() {
  print(new String.fromCharCodes([0x07]));
}
It beeps for me at least on Windows. It should work if the terminal supports it (and it's not disabled by the user and so forth).
If you want to do this on the browser, you should play a sound file.
Here's a free beep sound: http://www.freesound.org/people/SpeedY/sounds/3062/
A very simple example on the browser:
new AudioElement("path/to/beep.wav")
  ..autoplay = true
  ..load();
You can use the Audio API to generate tones. For example, the following Dart code will generate a beep lasting 50 ms when you hit a key.
import 'dart:html';
import 'dart:web_audio';
import 'dart:async';

void main() {
  const LENGTH = 50; // beep duration in milliseconds (must be const for const Duration)
  var ac = new AudioContext();
  window.onKeyUp.listen((KeyboardEvent ke) {
    print("press a key");
    OscillatorNode oscillator = ac.createOscillator();
    oscillator
      ..type = "sine"
      ..frequency.value = 1000
      ..connectNode(ac.destination, 0, 0)
      ..start(0);
    var timer = new Timer(const Duration(milliseconds: LENGTH), () {
      oscillator.disconnect(0);
    });
  });
}
Possibly there is a better way to terminate the generated tone than setting a timeout, perhaps by using an event listener (I'm still pretty new to the API; hopefully someone who knows more can edit the above code), but the result is an audible beep without any sound files... on a browser that supports the Audio API, that is.
If you are writing a console application, you can print '\a', which is the ASCII bell character.
This might not play a sound on all terminals. For instance, GNU screen may momentarily invert the screen colors and print the message "Wuff, Wuff!!" instead of playing a sound. Various terminal emulators allow you to disable ASCII bells, and the specific sound played may be modified in system settings.
Also, this won't work in the browser. For that, you'll have to use a sound file.

alternative to cvWaitKey() function in iOS

OK, what I am trying to do is retrieve a frame from an existing video file, do some work on the frame, and then save it to a new file.
What actually happens is that it writes some frames and then crashes, as the code runs quite fast.
If I don't put in cvWaitKey(), I get the same error I get when writing video frames with the AVFoundation library without using
AVAssetWriterInput.readyForMoreMediaData
OpenCV's video writer is implemented using AVFoundation classes, but we lose access to
AVAssetWriterInput.readyForMoreMediaData
Or am I missing something?
Here is code similar to what I'm trying to do:
while (grabResult && frameResult) {
    grabResult = cvGrabFrame(capture);           // capture a frame
    if (grabResult) {
        img = cvRetrieveFrame(capture, 0);       // retrieve the captured frame
        cvFlip(img, NULL, 0);                    // edit img
        frameResult = cvWriteFrame(writer, img); // add the frame to the file
        cvWaitKey(-1); // or anything that helps to finish adding the previous frame
    }
}
I am trying to convert a video file using OpenCV (without displaying it) in my iPhone/iPad app. Everything works except the cvWaitKey() function, and I get this error:
OpenCV Error: Unspecified error (The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script) in cvWaitKey,
Without this function, frames are dropped, as there's no way to know whether the video writer is ready. Is there an alternative for my problem?
I am using OpenCV 2.4.2, and I get the same error with the latest precompiled version of OpenCV.
Repaint the UIImageView:
[[NSRunLoop currentRunLoop]runUntilDate:[NSDate dateWithTimeIntervalSinceNow:0.0f]];
followed by your imshow or cvShowImage
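Putting that together with the question's loop, a hedged sketch of the conversion loop with the run-loop spin in place of cvWaitKey() (capture, writer, img, grabResult and frameResult as in the question):

// Spin the run loop each iteration instead of blocking in cvWaitKey(),
// giving AVFoundation time to finish writing the previous frame.
while (grabResult && frameResult) {
    grabResult = cvGrabFrame(capture);
    if (grabResult) {
        img = cvRetrieveFrame(capture, 0);
        cvFlip(img, NULL, 0);
        frameResult = cvWriteFrame(writer, img);
        [[NSRunLoop currentRunLoop]
            runUntilDate:[NSDate dateWithTimeIntervalSinceNow:0.0f]];
    }
}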
I don't know anything about iOS development, but did you try calling the C++ interface method waitKey()?

Can't access webcam with OpenCV

I'm using OpenCV 2.2 with Visual Studio 2010 on a Windows 7 64-bit PC.
I'm able to display pictures and play AVI files through OpenCV, as shown in the book "Learning OpenCV", but I'm not able to capture webcam images. Even the samples that ship with OpenCV can't access the webcam.
I get asked for "video source -> capture source", and there are two options: HP Webcam Splitter and HP Webcam. If I select HP Webcam, the window closes immediately without displaying any error (I think any error message is too fast to be seen before it closes). If I select HP Webcam Splitter, the new window, where the webcam video is supposed to appear, is filled with uniform gray. The webcam LED is on, but no video is seen. My webcam works fine with Flash (www.testmycam.com) and with DirectShow http://www.codeproject.com/KB/audio-video/WebcamUsingDirectShowNET.aspx
I did try to get some error message by using this:
#include "cv.h"
#include "highgui.h"
#include <iostream>
using namespace cv;
using namespace std;
int main(int, char**)
{
VideoCapture cap("0"); // open the default camera
if(!cap.isOpened()) // check if we succeeded
{
cout << "Error opening camera!";
getchar();
return -1;
}
Mat edges;
namedWindow("edges",1);
for(;;)
{
Mat frame;
cap >> frame; // get a new frame from camera
cvtColor(frame, edges, CV_BGR2GRAY);
GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5);
Canny(edges, edges, 0, 30, 3);
imshow("edges", edges);
if(waitKey(30) >= 0) break;
}
// the camera will be deinitialized automatically in VideoCapture destructor
return 0;
}
And the error message I got was:
warning: Error opening file (C:\Users\vp\work\ocv\opencv\modules\highgui\src\cap_ffmpeg.cpp:454)
Error opening camera!
I don't know what this "cap_ffmpeg.cpp" is and I don't know if this is any issue with the nosy "HP Media Smart" stuff.
Any help will be greatly appreciated.
I had the same issue on Windows 7 64-bit. I had to recompile opencv_highgui, changing the "Preprocessor Definitions" in the C/C++ panel of the properties page to include:
HAVE_VIDEOINPUT
HAVE_DSHOW
Hope this helps
cap_ffmpeg.cpp is the source file that uses ffmpeg to perform capture from the device. If the default example given with OpenCV doesn't work with your webcam, you are out of luck. I suggest you buy another one that is supported.
I recently installed OpenCV 2.2 and NetBeans 6.9.1. I had a problem with camera capture: the image in the window was black, but the program ran perfectly, without errors. I had to run NetBeans as an admin user to fix this problem.
I hope this can help you all.
I just switched to OpenCV 2.2 and am having essentially the same problem, but on a 32-bit computer running Vista. The webcam would start, but I'd get an error message setting the width property for the camera. If I specifically requested the DirectShow camera, the cvCreateCameraCapture call would fail.
What I think is going on is that the distribution version of HighGUI was built excluding the DirectShow camera. The favored Windows camera interface in OpenCV used to be Video for Windows (VFW), but that has been deprecated since Windows Vista came out, which has created all sorts of problems. Why they don't just include DirectShow, I don't know. Check the source file cap.cpp.
My next step is to rebuild HighGUI myself and make sure the flag HAVE_DSHOW is set. I seem to remember having the same problem with the last version of OpenCV I used, until I rebuilt it making sure the DirectShow support was enabled.
I experienced the same problem: my Vaio's webcam LED was on, but there was no image on the screen.
Then I tried to export the first frame to a JPEG file, and it worked. Then I tried inserting a delay of 33 ms before capturing any frame, and this time it worked like a charm. Hope this helps.
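A hedged sketch of that workaround using the C API (camera index 0 assumed; the 33 ms figure is just what worked above):

CvCapture *capture = cvCreateCameraCapture(0);
if (capture) {
    cvWaitKey(33); // give the driver ~33 ms to warm up
    IplImage *frame = cvQueryFrame(capture); // should now be a real frame
    // ...use frame here; don't release it, the capture owns the buffer
    cvReleaseCapture(&capture);
}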
Here's an article I wrote some time back. It uses the videoInput library to get input from webcams. It uses DirectX, so it works with almost every webcam out there. Capturing images with DirectX
When you create the cv::VideoCapture, you should give it an integer, not a string (a string implies the input is a file).
To open the default camera, open the stream with
cv::VideoCapture capture(0);
and it will work fine.
CMake GUI, MSVC++10E, Vista 32-bit, OpenCV 2.2:
It looks like the HAVE_VIDEOINPUT/WITH_VIDEOINPUT option doesn't work.
However, adding /D HAVE_DSHOW /D HAVE_VIDEOINPUT to CMAKE_CXX_FLAGS and CMAKE_C_FLAGS did the trick for me (there will be warnings due to macro redefinitions).

OpenCV and iPhone

I am writing an application to create a movie file from a bunch of images on an iPhone. I am using OpenCV. I downloaded the OpenCV static libraries for ARM (the iPhone's native instruction architecture), and the libraries were generated just fine. There were no problems linking to the libraries.
As a first step, I was trying to create an .avi file using one image, to see if it works. But cvCreateVideoWriter always returns a NULL value. I did some searching, and I believe it's due to the codec not being present. I am trying this on the iPhone simulator. This is what I do:
- (void)viewDidLoad {
    [super viewDidLoad];
    UIImage *anImage = [UIImage imageNamed:@"1.jpg"];
    IplImage *img_color = [self CreateIplImageFromUIImage:anImage];
    // The image gets created just fine
    CvVideoWriter *writer =
        cvCreateVideoWriter("out.avi", CV_FOURCC('P','I','M','1'),
                            25, cvSize(320,480), 1);
    // writer is always NULL
    int result = cvWriteFrame(writer, img_color);
    NSLog(@"\n%d", result);
    // hence this is also 0 all the time
    cvReleaseVideoWriter(&writer);
}
I am not sure about the second parameter: what sort of codec is it, and what exactly does it do?
I am a n00b at this. Any suggestions?
On *nix flavors, OpenCV uses ffmpeg under the covers to encode video files, so you need to make sure your static libraries are built with ffmpeg support. The second parameter, CV_FOURCC('P','I','M','1'), is the FOURCC code describing the video format/codec you are requesting, in this case the MPEG1 codec. Check out fourcc.org for a complete listing (not all of which work in ffmpeg).
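One hedged way to act on that: try the FOURCC you want, and fall back if the writer comes back NULL (MJPG below is just an example of a codec that ffmpeg builds commonly include, not a guaranteed fix):

CvVideoWriter *writer = cvCreateVideoWriter("out.avi",
        CV_FOURCC('P','I','M','1'), 25, cvSize(320, 480), 1);
if (!writer) {
    // this build likely lacks MPEG-1 support; try another codec
    writer = cvCreateVideoWriter("out.avi",
            CV_FOURCC('M','J','P','G'), 25, cvSize(320, 480), 1);
}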
