Does GStreamer for iOS currently support displaying video? I'm following the tutorial, which calls for creating a pipeline:
gst_parse_launch("videotestsrc ! warptv ! videoconvert ! autovideosink", &error);
and then connecting the video overlay.
video_sink = gst_bin_get_by_interface(GST_BIN(pipeline), GST_TYPE_VIDEO_OVERLAY);
However, video_sink is always nil. If I change the pipeline to just playbin, that works, but playbin plays from a URI, whereas I need to construct a full GStreamer video pipeline.
I also can't find any video sinks other than autovideosink. Is displaying a GStreamer video pipeline currently supported on iOS?
This is on iOS 7.1 with GStreamer 1.2.3.
With some help from the mailing list, I have got test video displaying. I've put up my working version of the iOS video tutorial app.
The short answer is that GStreamer 1.2.3 does support video display on iOS, via eglglessink. However, you need to modify the #defines in gst_ios_init.h to make sure eglglessink is included in the build. You also need a GLKView to provide the GL surface, and the video overlay methods to wire it up.
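Roughly, the hookup looks like the sketch below. This is a minimal outline rather than the tutorial's exact code: eaglView is a hypothetical GLKView * created elsewhere in the app, and the exact name of the #define to enable in gst_ios_init.h may differ between releases (look for an eglglessink entry among the plugin defines).

/* Sketch; assumes #include <gst/video/video.h> and a built pipeline.
 * eaglView is a hypothetical GLKView * created elsewhere. */
GstElement *video_sink =
    gst_bin_get_by_interface(GST_BIN(pipeline), GST_TYPE_VIDEO_OVERLAY);
if (video_sink) {
    /* Hand the GLKView to the sink as the native window handle. */
    gst_video_overlay_set_window_handle(GST_VIDEO_OVERLAY(video_sink),
                                        (guintptr)(id)eaglView);
    gst_object_unref(video_sink);
} else {
    /* Nothing implements GstVideoOverlay: eglglessink was not compiled in. */
    g_warning("No video overlay found; check the defines in gst_ios_init.h");
}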
I found it difficult to discover this from the documentation, so hopefully others will find the tutorial useful.
As mentioned in the release highlights, OpenCV (4.5.5) now has audio support in the videoio module. However, there is no documentation on this topic.
I've tried a few things on my own, like:
cv::VideoCapture cap(fileName, cv::CAP_MSMF);
However, no results so far.
How can I activate audio support? Am I missing something?
(It does not work for either a camera or video files.)
Additionally, I don't use pre-built binaries, but I also tried the pre-built Windows ones and they didn't work either.
As far as I can see, this can't be done yet: they haven't exposed an interface for it, which is why there's no documentation on the topic. Hopefully they'll bring that feature with 4.5.6.
So I am running VLC on a Pi 4 and I have installed an extension to VLC that shows the elapsed seconds for the video. However, when I use python-vlc to launch VLC it does not enable the extension. Is there a way to do this using python-vlc?
I think you're confusing VLC and LibVLC. By "extension", I believe you mean VLC app add-ons; I don't think you can easily get these running with standard LibVLC builds.
However, there is a way to achieve what you want using the LibVLC APIs. Look into the marquee APIs, which drive a video filter that lets you display custom text at custom positions on the video.
Docs: https://www.videolan.org/developers/vlc/doc/doxygen/html/group__libvlc__video.html#ga53b271a8a125a351142a32447e243a96
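In python-vlc, that maps onto the MediaPlayer.video_set_marquee_* methods. A minimal sketch along those lines (the file path is a placeholder, and the half-second refresh is an arbitrary choice):

import time
import vlc

player = vlc.MediaPlayer("video.mp4")  # placeholder path
# Enable the marquee filter and keep its text on screen indefinitely.
player.video_set_marquee_int(vlc.VideoMarqueeOption.Enable, 1)
player.video_set_marquee_int(vlc.VideoMarqueeOption.Timeout, 0)
player.play()
time.sleep(0.5)  # let playback spin up

# Poll the playback clock and redraw the elapsed seconds.
while player.get_state() not in (vlc.State.Ended, vlc.State.Error):
    elapsed = max(player.get_time(), 0) // 1000  # get_time() returns ms
    player.video_set_marquee_string(vlc.VideoMarqueeOption.Text, "%ds" % elapsed)
    time.sleep(0.5)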
I'm attempting to update a cordova app to read CODABAR format barcodes.
The barcode scanning plugin in use on iOS relies on the AV Foundation framework to set up an AVCaptureSession that activates the camera and intercepts image frames.
Most of the cordova plugins & iOS tutorials around the web use this method, attaching an AVCaptureMetadataOutput instance to specify which barcode formats we're interested in, e.g.:
outputItems = [[AVCaptureMetadataOutput alloc] init];
[outputItems setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
[captureSession addOutput:outputItems];
outputItems.metadataObjectTypes = [outputItems availableMetadataObjectTypes];
Unfortunately, CODABAR is not one of the supported formats.
Once the plugin receives the frames, it uses ZXing to process the image. ZXing supports all the formats I want, but since AVCaptureMetadataOutput doesn't let you specify CODABAR, my plugin never receives the images.
Is there an alternative to using an AVCaptureSession to process frames on the camera?
Am I missing a way to force the frames to be sent through despite the "unblessed" barcode format?
Ah. Found it, I think.
The codebase still has references to the ZXing calls, but they were swapped out without removing the old code, which led me down the wrong path.
Before it changed, it was using AVCaptureVideoDataOutput to process the frames itself.
Then it changed to using AVCaptureMetadataOutput, delegating the image processing to AV Foundation rather than ZXing.
It looks like, to add CODABAR support, I may need to reverse this, since AV Foundation doesn't do CODABAR.
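For reference, the frame-grabbing route looks roughly like the Swift sketch below (the plugin itself is Objective-C, but the AV Foundation calls are the same; decodeCodabar is a hypothetical hook into whichever ZXing port does the actual scanning):

import AVFoundation

// Receive raw frames via AVCaptureVideoDataOutput so a third-party decoder
// can scan formats that AVCaptureMetadataOutput doesn't expose (e.g. CODABAR).
final class FrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()

    func start() throws {
        guard let camera = AVCaptureDevice.default(for: .video) else { return }
        session.addInput(try AVCaptureDeviceInput(device: camera))

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "frames"))
        session.addOutput(output)
        session.startRunning()
    }

    // Called once per captured frame.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // decodeCodabar(pixelBuffer)  // hypothetical call into the ZXing port
        _ = pixelBuffer
    }
}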
UPDATE: This is about a year old now, but to tidy off this old question: I went ahead with the solution I suggested above, since there didn't appear to be any other way to do this on iOS. Here's part of my comment on the plugin's GitHub issue:
It turns out that:
Old versions of this plugin, in iOS, used an (ancient) snapshot of the ZXing C++ port. That library didn't support Codabar at the time.
At some point, the iOS version switched to using the iOS AV Foundation framework instead, delegating the barcode decoding to AVCaptureMetadataOutput. This framework doesn't support Codabar either. The switch was necessary due to memory leaks in the old C++ ZXing approach, which surfaced when iOS 10 landed for everyone.
I've attempted to integrate this plugin with a more recent Objective-C port of ZXing. This does support Codabar, along with a few extra formats currently missing from the iOS version of this plugin.
My attempt is over here:
https://github.com/otherchirps/phonegap-plugin-barcodescanner
My iOS project is now scanning codabar barcodes using this version of the plugin.
This was based on the earlier efforts to do the same found here:
https://github.com/dually8/BarcodeScanner
We're using the Easy Movie Texture asset from the Asset Store, and we are trying to play embedded mp4 files on an iPhone 7 device. It works fine with streaming URLs, but once I tried to actually Load() an mp4 file, it fails very unhelpfully with:
[prepareAsset]Error: Item cannot be played
Unknown error 0
MediaPlayerCtrl:OnError(MEDIAPLAYER_ERROR, MEDIAPLAYER_ERROR)
Are there any special gotchas anyone has seen with the difference between playing in the editor and on an iOS device?
The issue ended up being the bitrate. Once we encoded the videos with Adobe instead of ffmpeg, the videos seemed to work fine.
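If you'd rather stay with ffmpeg, re-encoding with a capped bitrate and a conservative H.264 profile may be worth a try. A sketch (the 4M cap, baseline profile, and level are guesses to tune for your target devices):

ffmpeg -i input.mp4 -c:v libx264 -profile:v baseline -level 3.1 \
  -b:v 4M -maxrate 4M -bufsize 8M -pix_fmt yuv420p -c:a aac output.mp4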
I have the same issue.
The error is printed when asset.playable is false.
It could be a URL issue or an unsupported format.
My guess is that the resolution is too high.
iOS: general devices support up to 1920 * 1080.
The latest devices support up to 2560 * 1440.
The iPhone 6s Plus supports up to 4K.
https://www.assetstore.unity3d.com/en/#!/content/10032
Edit: Tested; resolution was indeed my issue here.
I've never used that specific asset pack, but you may want to take a look at the Unity Documentation on Movie Textures, as it may still be relevant even with the asset pack you are using.
https://docs.unity3d.com/Manual/class-MovieTexture.html
It may be worth making sure your video file(s) meet the requirements mentioned in the Unity documentation and seeing if that remedies your issue.
I've hit a roadblock using GPUImage. I'm trying to apply a filter (SepiaFilter or OpacityFilter) to a prerecorded video. What I'm expecting to see is the video played back with the filter applied. I followed the SimpleFileVideoFilter example for my code. What I ended up with is a video that is unplayable by QuickTime (m4v extension), and a live preview of the rendering that is all skewed. I thought it was my code at first, so I ran the example app from the examples directory, and lo and behold, I got the same issue. Is the library broken? I just refreshed from master on GitHub.
Thanks!
Here's a sample of the generated video output:
http://youtu.be/SDb9GfVf9Lc
No matter what filter is applied, the resulting videos all look similar (all skewed).
@Brad Larson (I hope you see this message), do you know what I could be doing wrong? I am using the latest Xcode and the latest GPUImage source. I also tried using the latest from CocoaPods. Both end up the same.
I assume you're trying to run this example via the Simulator. Movie playback in the Simulator has been broken for as long as I can remember. You need to run this on an actual device to get movie playback to work.
Unfortunately, one of the recent pull requests that I brought in appears to have introduced some crashing bugs even there, and I may need to revert those changes and figure out what went wrong. Even that's not an iOS version thing, it's a particular bug with a recent code addition. I haven't had the time to dig into it and fix it, though.