Show only a fragment from stream - iOS

I am wondering whether it is possible to show only a fragment of a video stream. Let's say we have a 1920x1024 video and I want to show only the rectangle from (0,0) to (600,400).
Can I achieve this? What kind of libraries can do that? I tried to find something in VLC, but I did not find anything.
Something similar to this:
https://www.youtube.com/watch?v=8nNoUDH2k3Y
Thanks

Yes, you should be able to achieve this, although I am not quite sure whether you can reproduce exactly the dynamic cropping effect shown in the video you linked to.
For VLC, the term you are looking for is crop. The wiki is quite minimal, but it should get you started. Otherwise, check out the command-line help (vlc -H) and look for the crop and padd parameters.
A crop such as yours, (0,0) to (600,400), can be achieved through the video filter named croppadd or the vout filter crop (depending on your VLC version and platform). For a 1920x1024 source, keeping a 600x400 rectangle anchored at the top-left corner means cropping 1320 pixels off the right (1920 - 600) and 624 pixels off the bottom (1024 - 400):
vlc $video_input --sout='#transcode{vcodec=XXX,vb=XXX,fps=XX.X,width=1920,height=1024,vfilter=croppadd{croptop=0,cropbottom=624,cropleft=0,cropright=1320}}:standard{access=XXX,mux=XX,dst=XXX.XXX.XXX.XXX}'
The relevant part is the vfilter option:
vfilter=croppadd{croptop=0,cropbottom=624,cropleft=0,cropright=1320}
Another way to use it, as stated in the wiki:
vlc $video_input --video-filter=croppadd --croppadd-croptop=0 --croppadd-cropbottom=624 --croppadd-cropleft=0 --croppadd-cropright=1320
If you want a library, on the VLC side that would be libvlc and the corresponding LibVLC video controls libvlc_video_get_crop_geometry and libvlc_video_set_crop_geometry; see the LibVLC SDK documentation for details.
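As a minimal sketch of the library route, here is what that could look like with the python-vlc bindings (an assumption on my part: this presumes VLC 3.x with python-vlc installed, and the stream URL and the "WxH+left+top" geometry string are illustrative):

# Minimal sketch: crop playback to a 600x400 region at (0,0) via libvlc.
# Assumes the python-vlc bindings (pip install python-vlc) and VLC 3.x;
# the stream URL below is a placeholder.
import vlc
player = vlc.MediaPlayer("rtsp://example.com/stream")
player.play()
# Crop geometry format is "WxH+left+top": a 600x400 region anchored at (0,0).
player.video_set_crop_geometry("600x400+0+0")

The same call maps to libvlc_video_set_crop_geometry in the C API, so the equivalent should exist in whichever libvlc binding your platform uses.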
Maybe this points you in the right direction. It certainly depends on your setup: are you scripting and re-transcoding a video stream, or are you trying to integrate a library into your code?

Related

Is it possible to preview ffmpeg's output using ffplay in React Native?

I am building a mobile video editor using ffmpeg-kit/react-native (https://github.com/tanersener/ffmpeg-kit/tree/main/react-native). When I use a complex filter, it takes approximately 0.5-1 minutes to apply the filter, and only then am I able to show what the filter looks like.
What I want is to show a really quick preview before applying the filter. Many applications, like TikTok, use a GL view to mimic the filter before applying it (to achieve the really quick preview). After exploring FFmpeg, I came to know that ffplay (https://ffmpeg.org/ffplay.html) allows previewing filters before actually applying them. So the question is: how can I use ffplay in my mobile app, Android or iOS? I tried exploring mobile-FFmpeg but was not able to find any clue.

How to detect an image in a news paper and play a video relevant to it using augmented reality?

I am planning to detect an image in a newspaper and play the video relevant to it. I have seen several newspaper-reading AR apps that include this feature, but I couldn't find out how to do it. How can I do it?
I don't expect any code, but I would like to know what steps I should follow. Thank you.
You need to browse through the available marker-based AR SDKs. Such SDKs let you define in advance the database of images you would like to detect and respond to; once any of these images is detected at runtime, you get some kind of event with data about the detected image.
Vuforia is considered a good one and has good samples, so it is supposed to be easier to start with. You should also check out Kudan, and there are more.

Robot Framework, how to compare sound and video files

I have a sound/video source file, and I have to verify that my program, which opens and plays this file, works correctly.
I don't know how to verify a file like this.
I think I should capture the (sound/video) output and then compare it to the source file.
I've searched the internet but haven't found any solution.
This is going to be a real challenge for you. I personally have never done this, but hopefully I can provide some help to set you on your way...
First, you need to know that Robot Framework runs on Python, so anything you use will need to be in Python or have Python bindings; asking in the Python community may be a good start.
In terms of capturing sound, I believe it would be easier to use a program with an API you can drive. I found a document of someone doing this; whether it is still current, I am not sure:
http://www.nektra.com/files/DirectSound_Capture_With_Deviare.pdf
For video capture, try looking here:
https://www.youtube.com/watch?v=j344j34JBRs
Next would be stripping the video: separating the audio and the video frames and comparing them separately. For this you are going to need a video editor, an audio comparison library, and a tool for comparing images; a rough sketch of the frame-comparison part is shown below.
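As a very rough sketch of that frame-comparison step (my own illustration, not from any tool above: it assumes OpenCV's Python bindings and that both files decode at the same resolution and frame rate; file names and the tolerance are placeholders):

# Rough sketch: frame-by-frame comparison of two videos with OpenCV.
# Assumes `pip install opencv-python`; file names are placeholders.
import cv2
import numpy as np
source = cv2.VideoCapture("source.mp4")
captured = cv2.VideoCapture("captured.mp4")
frame_no = 0
while True:
    ok_a, frame_a = source.read()
    ok_b, frame_b = captured.read()
    if not (ok_a and ok_b):
        break  # one of the streams ended
    # Mean absolute pixel difference; 0 means identical frames.
    diff = np.mean(cv2.absdiff(frame_a, frame_b))
    if diff > 10:  # arbitrary tolerance for codec noise
        print(f"Frame {frame_no} differs (mean abs diff {diff:.1f})")
    frame_no += 1
source.release()
captured.release()

Since Robot Framework runs on Python, a function like this could be exposed as a keyword in a custom test library.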
In terms of how this would work end to end, I don't know, as I have never done this...
Why do you need to do this, though? Is there not a better way? Does your application make the video? In that case, could some checks on frames, length, and file size suffice? You need to provide more information.
This is a bit long for a comment, but the answer is admittedly incomplete.
Let me know how you get on.

Adding watermark to currently recording video and save with watermark

I would like to know if there is any way to add a watermark to a video that is currently being recorded and save it with the watermark. (I know about adding watermarks to video files that are already available in the app bundle and exporting them with a watermark.)
iPhone Watermark on recorded Video.
I checked this link. The accepted answer is not a good one, and the most voted answer is applicable only if there is already a video file in your bundle. (Please read the answer before suggesting it.)
Thanks in advance
For this purpose it's better to use the GPUImage library (an open-source library available on GitHub). It contains many filters, and it is possible to add an overlay using GPUImageOverlayBlendFilter. It includes a sample, FilterShowcase, that explains a lot about using the filters. It uses the GPU, so it takes on the overhead of processing the image. Full credit goes to Brad Larson, who created such a great library.
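GPUImageOverlayBlendFilter itself is an Objective-C API and runs on the GPU, but the idea behind it, blending a watermark image over every recorded frame, is straightforward. Purely as an illustration of that blend concept (this is not GPUImage's API and not iOS code; it assumes OpenCV's Python bindings, and the file names and blend weights are placeholders):

# Illustration of the overlay-blend idea: alpha-blend a watermark
# onto every frame while writing the output file.
# Not GPUImage; assumes `pip install opencv-python`, placeholder file names.
import cv2
video = cv2.VideoCapture("recording.mp4")
watermark = cv2.imread("watermark.png")
fps = video.get(cv2.CAP_PROP_FPS)
w = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("watermarked.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
wm = cv2.resize(watermark, (w, h))  # stretch watermark to frame size
while True:
    ok, frame = video.read()
    if not ok:
        break
    # 90% original + 10% watermark: the essence of an overlay blend.
    out.write(cv2.addWeighted(frame, 0.9, wm, 0.1, 0))
video.release()
out.release()

On iOS, GPUImage's FilterShowcase sample demonstrates filters like this applied live to the camera feed.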

Streaming OpenCV video

I need some ideas about how to stream a video feed coming from OpenCV to a webpage. I currently have GStreamer, but I don't know if this is the right tool for the job. Any advice on using GStreamer, or any hyperlinks to tutorials, would be helpful and appreciated!
Thanks!
OpenCV doesn't provide an interface for streaming video, which means that you'll need to use some other technology for this purpose.
I've used GStreamer in several professional projects: this is the droid you are looking for.
I do not have any experience with streaming OpenCV output to a website; however, I'm sure this is possible using GStreamer.
Using a GStreamer stream, it is possible to get the data and convert it into OpenCV format. I recommend you read up on GstAppSink and GstBuffer.
Basically, if I remember correctly, you must run a pipeline in a background thread. Then, using some function in gst_app_sink, you can get the buffer data from the sink.
On a quick lookup of the issue, you had to use GST_BUFFER_DATA for this.
I remember having to convert the result from YCbCr to BGR; a colleague had problems because OpenCV's conversion was inadequate, so you might have to write your own. (This was back in the IplImage* days.)
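GST_BUFFER_DATA is from the old GStreamer 0.10 API; in GStreamer 1.x you map the buffer instead. As a hedged sketch of pulling appsink frames into OpenCV (assuming the PyGObject GStreamer bindings and opencv-python; the videotestsrc pipeline is just a stand-in for a real source), note that asking videoconvert for BGR also sidesteps the manual YCbCr-to-BGR conversion mentioned above:

# Sketch: pull frames from a GStreamer 1.x appsink into OpenCV.
# Assumes PyGObject GStreamer bindings and opencv-python;
# the videotestsrc pipeline is a placeholder for a real source.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import numpy as np
import cv2
import time

Gst.init(None)
pipeline = Gst.parse_launch(
    'videotestsrc ! videoconvert ! video/x-raw,format=BGR ! '
    'appsink name=sink emit-signals=true max-buffers=1 drop=true')
sink = pipeline.get_by_name('sink')

def on_new_sample(appsink):
    sample = appsink.emit('pull-sample')
    caps = sample.get_caps().get_structure(0)
    width, height = caps.get_value('width'), caps.get_value('height')
    buf = sample.get_buffer()
    ok, mapinfo = buf.map(Gst.MapFlags.READ)  # 1.x replacement for GST_BUFFER_DATA
    if ok:
        frame = np.frombuffer(mapinfo.data, np.uint8).reshape(height, width, 3)
        cv2.imwrite('frame.png', frame)  # or any other OpenCV processing
        buf.unmap(mapinfo)
    return Gst.FlowReturn.OK

sink.connect('new-sample', on_new_sample)
pipeline.set_state(Gst.State.PLAYING)
time.sleep(5)  # a real app would run a GLib main loop instead
pipeline.set_state(Gst.State.NULL)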
