Adding a watermark to a currently recording video and saving it with the watermark - iOS

I would like to know if there is any way to add a watermark to a video that is currently recording and save it with the watermark. (I know how to add a watermark to a video file that is already available in the app bundle and export it with the watermark.)
iPhone Watermark on recorded Video.
I checked this link. The accepted answer is not a good one, and the most voted answer is applicable only if there is already a video file in your bundle. (Please read the answer before suggesting it.)
Thanks in advance

For this purpose it's better to use the GPUImage library (an open-source library available on GitHub). It contains many filters, and it is possible to add an overlay using GPUImageOverlayBlendFilter. It includes a sample, FilterShowcase, that explains a lot about using the filters. It uses the GPU, so it takes on the overhead of processing the image. Full credit goes to @BradLarson, who created this great library.
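What GPUImageOverlayBlendFilter does per frame is composite the watermark image over each camera frame on the GPU. The per-pixel math underneath is standard alpha compositing; here is a minimal CPU sketch of that math (illustrative JavaScript, not the GPUImage API - `frame` and `watermark` are hypothetical flat RGBA pixel arrays):

```javascript
// Alpha-composite a watermark (RGBA) over a video frame (RGBA) in place.
// Both buffers are flat arrays laid out as [r, g, b, a, r, g, b, a, ...].
function blendWatermark(frame, watermark) {
  for (let i = 0; i < frame.length; i += 4) {
    const a = watermark[i + 3] / 255; // watermark alpha at this pixel
    for (let c = 0; c < 3; c++) {
      frame[i + c] = Math.round(watermark[i + c] * a + frame[i + c] * (1 - a));
    }
    // The frame itself stays fully opaque.
  }
  return frame;
}

// Example: a red frame pixel under a half-transparent white watermark pixel.
const frame = [255, 0, 0, 255];
const mark = [255, 255, 255, 128];
blendWatermark(frame, mark); // frame is now [255, 128, 128, 255]
```

A GPU filter runs exactly this kind of computation in a fragment shader, once per pixel, which is why it keeps up with a live camera feed.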

Related

How to parse a MPD manifest video file and get segments of an image adaptation set?

I am using an MPEG-DASH MPD file to stream video using videoJS.
I am trying to display thumbnails of the video while the user drags the seek bar.
The adaptation set for images is present in the manifest file. Now I am trying to parse the MPD file and get the segments out of it. How can I achieve this using JavaScript?
I tried parsing the manifest file using the plugin at https://www.npmjs.com/package/mpd-parser, but it picks up only the segments for audio, video, subtitles, and closed captions.
Is there a plugin that handles the same for the image adaptation set?
As I think you know, the images are in a separate adaptation set - from the DASH interoperability spec (https://dashif.org/docs/DASH-IF-IOP-v4.3.pdf):
For providing easily accessible thumbnails with timing, Adaptation Sets with the new contentType="image" may be used in the MPD. A typical use case is for enhancing a scrub bar with visual cues. The actual asset referred to is a rectangular tile of temporally equidistant thumbnails combined into one jpeg or png image. A tile, therefore, is very similar to a video segment from the MPD timing point of view, but is typically much longer.
and
It is typically expected that the DASH client is able to process such Adaptation Sets by downloading the images and using browser-based processing to assign the thumbnails to the Media Presentation timeline.
It sounds like you want a tool or some code that lets you view the thumbnails - some players provide this at the user level, e.g. see the TheoPlayer info here:
https://www.theoplayer.com/blog/in-stream-thumbnail-support-dvr-dash-streams
You can also leverage, and possibly reuse, the parsing that is already built into an open-source player - see this discussion in the Shaka Player support issues, which covers how to parse and retrieve thumbnails as well as the thumbnail format itself:
https://github.com/google/shaka-player/issues/3371#issuecomment-828819282
The above thread contains some example code to extract images also.
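In essence, the approach in that thread comes down to: find the AdaptationSet with contentType="image", read its SegmentTemplate, and expand the $RepresentationID$ and $Number$ variables into tile URLs. Here is a simplified sketch of that idea using plain string matching (a real implementation should use a proper XML parser; the MPD fragment below is invented for illustration):

```javascript
// Extract thumbnail-tile segment URLs from an MPD's image adaptation set.
// Simplified: assumes one image AdaptationSet using a SegmentTemplate.
function imageSegmentUrls(mpdXml, segmentCount) {
  const asMatch = mpdXml.match(
    /<AdaptationSet[^>]*contentType="image"[\s\S]*?<\/AdaptationSet>/
  );
  if (!asMatch) return [];
  const as = asMatch[0];
  const media = (as.match(/media="([^"]+)"/) || [])[1];
  const repId = (as.match(/<Representation[^>]*\bid="([^"]+)"/) || [])[1];
  const startMatch = as.match(/startNumber="(\d+)"/);
  const start = startMatch ? parseInt(startMatch[1], 10) : 1;
  if (!media || !repId) return [];
  const urls = [];
  for (let n = start; n < start + segmentCount; n++) {
    urls.push(
      media.replace("$RepresentationID$", repId).replace("$Number$", String(n))
    );
  }
  return urls;
}

// Hypothetical MPD fragment for illustration:
const mpd = `
<MPD><Period>
  <AdaptationSet contentType="image">
    <SegmentTemplate media="thumbs/$RepresentationID$/tile_$Number$.jpg"
                     startNumber="1" duration="100"/>
    <Representation id="thumb_320" bandwidth="10000"/>
  </AdaptationSet>
</Period></MPD>`;

const urls = imageSegmentUrls(mpd, 3);
// urls[0] === "thumbs/thumb_320/tile_1.jpg"
```

Each returned URL points at one tile image; your seek-bar code then maps the playhead time to a tile and a sub-rectangle within it.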

How to detect an image in a newspaper and play a video relevant to it using augmented reality?

I plan to detect an image in a newspaper and play the video relevant to it. I have seen several newspaper-reading AR apps that include this feature, but I couldn't find out how to do it. How can I?
I don't expect any code, but I would like to know what steps I should follow to do this. Thank you.
You need to browse through the available marker-based AR SDKs. Such SDKs let you define in advance the database of images you would like to detect and respond to; once any of these images is detected at runtime, you get some kind of event with data on the detected image.
Vuforia is considered a good one, and it has good samples, so it is supposed to be easier to start with. You should also check out Kudan, and there are more.

How to do chroma keying in iOS to process a video file using Swift

I am working on a video editing application.
I need a function to do chroma keying and replace green background in a video file (not live feed) with an image or another video.
I have looked through the GPUImage framework, but it is not suitable for my project because I cannot use third-party frameworks, so I am wondering if there is another way to accomplish this.
Here are my questions:
Does it need to be done through shaders in OpenGL ES?
Is there another way to access the frames and replace the background, doing the chroma keying, using the AVFoundation framework?
I am not too versed in Graphics processing so I really appreciate any help.
Apple's image and video processing framework is Core Image. There is sample code for Core Image that includes a chroma key example.
No, it's not in Swift, which you asked for. But as @MoDJ says, video software is really complex; reuse what already exists.
I would use Apple's Objective-C sample code to get something that works, then, if you feel you MUST have it in Swift, port it little by little, and if you run into specific problems ask about them here.
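Whichever route you take - a shader, Core Image, or CPU access to frames via AVFoundation's AVAssetReader - the per-pixel chroma-key test is the same: if a pixel is close enough to the key color, take the background pixel instead. A minimal CPU sketch of that core test (illustrative JavaScript operating on flat RGBA arrays, not iOS API):

```javascript
// Replace green-screen pixels in `frame` with pixels from `background`.
// Both are flat RGBA arrays of the same size. A pixel counts as "green"
// when its green channel dominates red and blue by more than `threshold`.
function chromaKey(frame, background, threshold = 50) {
  const out = frame.slice();
  for (let i = 0; i < frame.length; i += 4) {
    const r = frame[i], g = frame[i + 1], b = frame[i + 2];
    if (g - Math.max(r, b) > threshold) {
      out[i] = background[i];
      out[i + 1] = background[i + 1];
      out[i + 2] = background[i + 2];
      out[i + 3] = background[i + 3];
    }
  }
  return out;
}

// Two pixels: pure green (keyed out) and red (kept); a blue background.
const frame = [0, 255, 0, 255,  255, 0, 0, 255];
const bg    = [0, 0, 255, 255,  0, 0, 255, 255];
const out = chromaKey(frame, bg); // [0, 0, 255, 255, 255, 0, 0, 255]
```

Production keyers are more forgiving at the edges (they blend rather than hard-switch, often working in a hue-based color space), but this is the decision every approach makes per pixel.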

iOS: Adding real-time filter effects to photos

I want to program an iOS app that takes photos, but I would like to filter the photo preview in real time. What I mean is implemented in the app called "CamWow" (here is a video of the app: http://www.youtube.com/watch?v=L_o-Bx08YZE ). I am curious how this can be done. Does anybody have an idea how to build such an app that provides a filtered real-time preview of the photo and captures a filtered photo?
As Fraggle points out, on iOS 5.0, you can use the Core Image framework to do image filtering. However, Core Image is limited to the filters they ship with the framework, and I've found that it is not able to process video in realtime in many cases.
As a result, I created my BSD-licensed open source GPUImage framework, which encapsulates the OpenGL ES 2.0 code you need to do GPU-accelerated processing of images and video. I have some examples of the kind of filtering you can do with this in this answer, and you can easily write your own custom filters using the OpenGL Shading Language. The sample applications in the framework show how to do filtering of images with live previews, as well as how to filter and save them out to disk.
I'm looking for the same kind of info (it's a pretty hot sector, so some devs may not be willing to give up the goods just yet). I came across this, which may not be exactly what you want, but could be close. It's a step-by-step tutorial on processing a live video feed.
Edit: I've tried the code that was provided in that link. It can be used to provide filters in real time. I modified the captureOutput method in ViewController.m, commented out the second filtering step ("CIMinimumCompositing"), and inserted my own filter (I used "CIColorMonochrome").
It worked. My first few attempts failed because, apparently, not all filters in the Core Image Filter Reference are available on iOS. There is some more documentation here.
I'm not sure if this code is the best performance-wise, but it does work.
Edit #2: I saw some other answers on Stack Overflow that recommended using OpenGL for processing, which this sample code does not do. OpenGL should be faster.
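For context on what a monochrome filter like the CIColorMonochrome used above actually computes per pixel: it reduces each pixel to a luminance value and scales a tint color by it. Whether that runs through Core Image or an OpenGL shader, the math is the same; a sketch using the common Rec. 601 luma weights (illustrative JavaScript on a flat RGBA array, not the Core Image API):

```javascript
// Tint a flat RGBA pixel array monochrome: each source pixel's luminance
// scales the tint color, similar in spirit to a CIColorMonochrome filter.
function monochrome(pixels, tint = [255, 255, 255]) {
  const out = pixels.slice();
  for (let i = 0; i < pixels.length; i += 4) {
    // Rec. 601 luma weights for perceived brightness.
    const luma =
      (0.299 * pixels[i] + 0.587 * pixels[i + 1] + 0.114 * pixels[i + 2]) / 255;
    out[i] = Math.round(tint[0] * luma);
    out[i + 1] = Math.round(tint[1] * luma);
    out[i + 2] = Math.round(tint[2] * luma);
  }
  return out;
}

// A pure-green pixel becomes mid-gray under the default white tint.
const gray = monochrome([0, 255, 0, 255]);
```

This is the kind of tiny, branch-free per-pixel kernel that maps well onto a GPU fragment shader, which is why the OpenGL approaches are faster for live previews.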

Compare two audio files in iOS

I want to record two voices and compare them. I think there is some Apple sample code for voice recording, but I have no idea how to compare two audio files. What is the right approach for this? Is there any framework Apple provides for this purpose, or is there any third-party framework?
It's not in Objective-C, but it contains a fantastic explanation of how audio is compared by Shazam, and it includes sample code (and the source of a working application) in Java:
Check this out
Additionally, this question has a fantastic link about audio fingerprinting, which is essentially the same as the article above, but more in depth.
Hope this helps
I'm using ViSQOL for this purpose. If your audio files are generally not longer than 10 seconds, this could be worth looking into. Also check out the ffmpeg library for converting the files into the desired format (ViSQOL requires a certain sample rate, depending on whether it is music or speech).
https://github.com/google/visqol
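The fingerprinting idea behind Shazam-style comparison, stripped to its core, is: split each recording into frames, find each frame's dominant frequency (a crude "fingerprint"), and score how many frames agree between the two recordings. A toy sketch with a naive DFT (illustrative only - real systems use FFTs, constellations of spectral peaks, and hashing to be robust to noise and time offsets):

```javascript
// Dominant-frequency "fingerprint": one DFT bin index per frame.
function fingerprint(samples, frameSize = 64) {
  const bins = [];
  for (let start = 0; start + frameSize <= samples.length; start += frameSize) {
    let best = 0, bestMag = -1;
    for (let k = 1; k < frameSize / 2; k++) { // naive DFT, skipping DC
      let re = 0, im = 0;
      for (let n = 0; n < frameSize; n++) {
        const phase = (2 * Math.PI * k * n) / frameSize;
        re += samples[start + n] * Math.cos(phase);
        im -= samples[start + n] * Math.sin(phase);
      }
      const mag = re * re + im * im;
      if (mag > bestMag) { bestMag = mag; best = k; }
    }
    bins.push(best);
  }
  return bins;
}

// Similarity score: fraction of frames whose dominant frequency matches.
function similarity(a, b) {
  const n = Math.min(a.length, b.length);
  let hits = 0;
  for (let i = 0; i < n; i++) if (a[i] === b[i]) hits++;
  return n ? hits / n : 0;
}

// Two recordings of the same pure tone match; a different tone does not.
const tone = (freqBin) =>
  Array.from({ length: 256 }, (_, n) => Math.sin((2 * Math.PI * freqBin * n) / 64));
const score = similarity(fingerprint(tone(5)), fingerprint(tone(5)));  // 1
const diff = similarity(fingerprint(tone(5)), fingerprint(tone(12)));  // 0
```

On iOS you would pull the samples out with AVFoundation and use the Accelerate framework's FFT instead of this O(n²) DFT, but the comparison logic stays the same shape.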
