I have an issue with cropping a video, and I also need to expand the video cropping frame the way the native app does: whenever the user taps on the cropping frame, I need to expand the video frames (thumbnails) and also crop the video to the selected range, but I don't understand how to do it.
Here's a link to an open source control that you can use as a foundation.
https://github.com/andrei200287/SAVideoRangeSlider
This control already generates thumbnails from the source video and draws a linear trim bar with draggable handles for setting the video's start and end points. All you'd need to add is code that reloads the trim bar with finer-grained thumbnails from around the position of whichever handle is currently being dragged, as sketched below.
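For that reload step, something along these lines might work. This is a minimal Swift sketch built on AVAssetImageGenerator; SAVideoRangeSlider itself is Objective-C, and all the names here are illustrative rather than part of the control:

```swift
import AVFoundation
import UIKit

// Sketch: regenerate thumbnails for a narrow time window around the handle
// being dragged, so the trim bar can "zoom in" on that region of the video.
func zoomedThumbnails(for asset: AVAsset,
                      around center: CMTime,
                      window: CMTime,
                      count: Int,
                      completion: @escaping ([UIImage]) -> Void) {
    let generator = AVAssetImageGenerator(asset: asset)
    generator.appliesPreferredTrackTransform = true
    generator.maximumSize = CGSize(width: 120, height: 0) // thumbnail width, height unconstrained

    // Build `count` evenly spaced times spanning [center - window/2, center + window/2].
    let start = CMTimeSubtract(center, CMTimeMultiplyByRatio(window, multiplier: 1, divisor: 2))
    let step = CMTimeMultiplyByRatio(window, multiplier: 1, divisor: Int32(count))
    let times = (0..<count).map { i in
        NSValue(time: CMTimeAdd(start, CMTimeMultiplyByRatio(step, multiplier: Int32(i), divisor: 1)))
    }

    var images: [UIImage] = []
    var processed = 0
    generator.generateCGImagesAsynchronously(forTimes: times) { _, cgImage, _, result, _ in
        processed += 1
        if result == .succeeded, let cgImage = cgImage {
            images.append(UIImage(cgImage: cgImage))
        }
        // Deliver once every requested time has been handled (succeeded or not).
        if processed == times.count {
            DispatchQueue.main.async { completion(images) }
        }
    }
}
```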
This control also comes with a sample app that exports the trimmed video from the selected start and end points.
Hopefully that's what you're looking for.
I am currently working on an app where we would like to download a PDF from a remote server and then draw on it. We would like to draw Google Maps pin-like annotations on the PDF (the static drawing part). Furthermore, we would like to detect when a user has touched a pin and then draw a callout box over the PDF (the dynamic drawing part). We obviously would like the PDF to be scrollable/zoomable. Does anyone know of a good way to achieve this?
Things I have researched:
1) Render in a UIWebView. This seems like a great solution, but it's not clear to me how to then implement the drawing code on top of the PDF. I have heard people suggest creating a transparent UIView above the UIWebView for the drawing. This seems to come with its own issues: how will it handle zooming and scrolling?
2) Use Quartz 2D to generate my own PDF from the PDF I fetch from the server. As I draw my own PDF content I can draw the static marker pins, and once I have this PDF I can shove it into a web view. The problem with this approach, however, is that I still need to handle the dynamic drawing of the callout boxes when a user taps a pin, which takes me back to problem 1.
You're correct that Apple does not offer much for this use case. There's UIWebView, which can preview and show PDF documents, but it's really not suited to adding annotations, and any "solution" built from overlay views will be very fragile, if you manage to make it work at all. UIWebView is meant as a black box for reading PDF documents, not for annotating them.
You have to go all the way back to CGContextRef and take over the scrolling, zooming and touch handling/drawing yourself. Apple's ZoomingPDFViewer example is a good start.
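To give a feel for the Core Graphics side, here is a minimal Swift sketch of drawing one page of a CGPDFDocument into a view. The ZoomingPDFViewer sample is Objective-C and adds CATiledLayer-backed tiling inside a UIScrollView on top of this, which you would still need for crisp zooming:

```swift
import UIKit

// Minimal sketch: a UIView that renders a single page of a CGPDFDocument.
final class PDFPageView: UIView {
    var page: CGPDFPage? { didSet { setNeedsDisplay() } }

    override func draw(_ rect: CGRect) {
        guard let page = page, let ctx = UIGraphicsGetCurrentContext() else { return }

        // PDF coordinates have their origin at the bottom-left; flip to UIKit's top-left.
        ctx.translateBy(x: 0, y: bounds.height)
        ctx.scaleBy(x: 1, y: -1)

        // Scale the page's media box to fit this view, then draw the page.
        ctx.concatenate(page.getDrawingTransform(.mediaBox, rect: bounds,
                                                 rotate: 0, preserveAspectRatio: true))
        ctx.drawPDFPage(page)
    }
}

// Usage: load a document and hand a page to the view.
// let doc = CGPDFDocument(pdfURL as CFURL)
// pageView.page = doc?.page(at: 1)   // CGPDFDocument pages are 1-indexed
```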
I have been working on this problem since 2010, and we offer a commercial solution for PDF annotating on iOS, Android and Web called PSPDFKit. We ship a custom renderer which is better and more exact than Apple's Core Graphics renderer, but the more interesting part is that we can deal with all common PDF annotation types. You can use note annotations to represent your pins and move them around, add notes, and interact with or override the default tap handling (e.g. show your own popover when people tap on them). They also always stay the same size, so they can be anchored at an exact point in the PDF and keep their on-screen size while you zoom in.

The best part is that this is all part of the PDF spec, so the annotations also work with Apple's Preview app or Adobe Acrobat: people can customize the markup, and everything is saved in the PDF itself. The architecture is flexible, so you can also simply save everything in a database or sync it back up to your server and use the annotations only for touch handling.
You can also build this yourself: the basic architecture is a UIScrollView plus managed views. It quickly gets tricky once you add zooming with views that need to stay the same size, plus touch handling, and maybe you also want things like multi-select or regular ink drawing. You will also want to add some sort of image caching layer, since rendering PDF documents can be quite slow on mobile devices. Oh, and if you want to make text selectable or implement search, be ready for a rabbit hole called the Adobe CMap and CIDFont Files Specification.
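As one concrete illustration of the "views that need to stay the same size" point: if each pin is a subview layered above the zooming content, you can counter-scale it as the scroll view zooms. A rough Swift sketch, where `pinViews` is a hypothetical collection of annotation views:

```swift
import UIKit

// Sketch: keep pin annotation views a constant on-screen size while the
// PDF content underneath zooms, by inverting the zoom in scrollViewDidZoom.
final class AnnotatedPDFController: UIViewController, UIScrollViewDelegate {
    let scrollView = UIScrollView()
    var pinViews: [UIView] = []   // one view per note/pin annotation

    func scrollViewDidZoom(_ scrollView: UIScrollView) {
        // Counter-scale each pin so it keeps a constant on-screen size.
        let inverse = 1 / scrollView.zoomScale
        pinViews.forEach { $0.transform = CGAffineTransform(scaleX: inverse, y: inverse) }
    }
}
```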
I want to track the relative position of a camera aimed at a computer screen.
I can’t control what is displayed on the computer screen but I can receive screen dumps whenever something changes on the screen. Those screen dumps can hopefully be used to find the screen when analyzing the video from the camera.
I have seen many YouTube videos of face, logo, or single-colored-object tracking with OpenCV, but I'm unsure whether those methods would work for finding and tracking a more detailed image like a screen dump.
Maybe template matching is the way to go? But I need to find the screen even when it is viewed at an angle.
Basically, I don't know where to begin and need help from people with experience in this field to find the best way to achieve what I want.
Thanks
Feature matching should do the trick (SIFT/SURF/ORB/...). Unlike plain template matching, local features tolerate changes in scale, rotation, and perspective: match the screen dump's features against each camera frame, then fit a homography to the matched points to locate the screen even at an angle.
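The usual home for SIFT/SURF/ORB is OpenCV, which has no first-class Swift API. Purely as a Swift-flavored sketch of the end goal (recovering the perspective transform that maps the screen dump into the camera frame), Apple's Vision framework (iOS 11+) can estimate a homography directly. Note that Vision's registration expects the two images to show largely the same content, so for real detection of a screen inside a cluttered frame, OpenCV feature matching plus findHomography remains the stronger route:

```swift
import Vision
import simd

// Hedged sketch: estimate the 3x3 homography warping the screen dump onto the
// camera frame. `screenDump` and `cameraFrame` are assumed CGImages.
func screenHomography(screenDump: CGImage, cameraFrame: CGImage) throws -> matrix_float3x3? {
    // Register the screen dump (floating image) against the camera frame (reference).
    let request = VNHomographicImageRegistrationRequest(targetedCGImage: screenDump, options: [:])
    try VNImageRequestHandler(cgImage: cameraFrame, options: [:]).perform([request])

    // warpTransform is the perspective matrix mapping screen-dump
    // coordinates into camera-frame coordinates.
    let observation = request.results?.first as? VNImageHomographicAlignmentObservation
    return observation?.warpTransform
}
```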
I have a short video of 10 minutes. It is actually an online lecture; when you watch it, you only see a slide show (some slides are annotated).
I have the original slides (PDF, images, PPT, whatever). Is it possible to match each slide to the specific time in the video when it appears?
My idea is to take each slide image and compare it against every frame of the video, trying to find where that slide appears.
What do you think of this idea? Is it doable with some algorithm? Can I just subtract the slide image from the video frame (calculate the difference) and see which difference is close to zero? Thanks
If the images are perfectly aligned, then you can use any of simple differencing, sum of squared differences, or normalised cross-correlation. However, if they are not aligned, you will need to register the two images first, followed by any of the three matching methods just mentioned. Do a Google search for image registration; affine registration might be sufficient for your problem.
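For reference, the two scores mentioned above can be written as follows for a frame F and slide S of equal size, where i ranges over pixels and bars denote means; pick the slide that minimises SSD, or maximises NCC:

```latex
\mathrm{SSD}(F, S) = \sum_{i} \left( F_i - S_i \right)^2,
\qquad
\mathrm{NCC}(F, S) =
  \frac{\sum_i (F_i - \bar{F})(S_i - \bar{S})}
       {\sqrt{\sum_i (F_i - \bar{F})^2}\,\sqrt{\sum_i (S_i - \bar{S})^2}}
```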
I'm working on an iPad app that records and plays videos using AVFoundation classes. I have all of the code for basic record/playback in place, and now I would like to add a feature that allows the user to draw and make annotations on the video, something I believe will not be too difficult. The harder part, and something that I have not been able to find any examples of, will be combining the drawing and annotations into the video file itself. I suspect this part is accomplished with AVComposition, but I have no idea exactly how. Your help would be greatly appreciated.
Mark
I do not think that you can actually save a drawing into a video file in iOS. You could however consider using a separate view to save the drawing and synchronize the overlay onto the video using a transparent view. In other words, the user circled something at time 3 mins 42 secs in the video. Then when the video is played back you overlay the saved drawing onto the video at the 3:42 mark. It's not what you want but I think it is as close as you can get right now.
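A rough Swift sketch of that synchronized-overlay idea: reveal a saved drawing view (layered transparently over the player) once playback reaches the moment the drawing was made, 3:42 in the example above. The names here are illustrative, not from any particular library:

```swift
import AVFoundation
import UIKit

// Poll the playback clock a few times a second and toggle the overlay view.
func syncOverlay(player: AVPlayer, drawingView: UIView) -> Any {
    let annotationTime = CMTime(seconds: 3 * 60 + 42, preferredTimescale: 600)
    let interval = CMTime(seconds: 0.25, preferredTimescale: 600)

    // Keep the returned token and pass it to player.removeTimeObserver(_:) later.
    return player.addPeriodicTimeObserver(forInterval: interval, queue: .main) { now in
        // Show the drawing only once playback reaches the annotation's timestamp.
        drawingView.isHidden = now < annotationTime
    }
}
```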
EDIT: Actually there might be a way after all. Take a look at this tutorial. I have not read the whole thing but it seems to incorporate the overlay function you need.
http://www.raywenderlich.com/30200/avfoundation-tutorial-adding-overlays-and-animations-to-videos
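The tutorial's approach boils down to AVVideoCompositionCoreAnimationTool, which composites a CALayer tree over the video during export. A condensed Swift sketch of that technique (the tutorial itself is Objective-C; error handling and orientation fix-ups are omitted, and a single video track is assumed):

```swift
import AVFoundation
import UIKit

// Burn a CALayer overlay (e.g. the user's drawings) into an exported video file.
func exportVideo(asset: AVAsset, overlay: CALayer, to outputURL: URL) {
    guard let track = asset.tracks(withMediaType: .video).first else { return }
    let size = track.naturalSize

    // The animation tool needs a parent layer containing a video layer,
    // with the overlay stacked on top of it.
    let videoLayer = CALayer()
    videoLayer.frame = CGRect(origin: .zero, size: size)
    let parentLayer = CALayer()
    parentLayer.frame = videoLayer.frame
    overlay.frame = videoLayer.frame
    parentLayer.addSublayer(videoLayer)
    parentLayer.addSublayer(overlay)

    // Video composition that renders each frame into videoLayer beneath the overlay.
    let composition = AVMutableVideoComposition(propertiesOf: asset)
    composition.animationTool = AVVideoCompositionCoreAnimationTool(
        postProcessingAsVideoLayer: videoLayer, in: parentLayer)

    let exporter = AVAssetExportSession(asset: asset,
                                        presetName: AVAssetExportPresetHighestQuality)
    exporter?.videoComposition = composition
    exporter?.outputURL = outputURL
    exporter?.outputFileType = .mp4
    exporter?.exportAsynchronously {
        // Check exporter?.status and exporter?.error here.
    }
}
```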
All I need to do is capture an image, yet all I can find is complicated code for capturing video or multiple frames. I can't use UIImagePickerController because I don't want the camera shutter animation, I have a custom overlay, and my app is landscape only. What is the simplest way to manually capture an image from the front or back live camera view in the correct orientation? I don't want to save it to the camera roll; I want to present it in a view controller for editing.
Take a look at the SquareCam example from Apple (http://developer.apple.com/library/ios/#samplecode/SquareCam/Introduction/Intro.html). It contains everything you need for high-quality capture of still images. I recently copied code from this project myself to solve the same task you describe. It works well :)
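For reference, SquareCam itself is built on the older AVCaptureStillImageOutput API; on current SDKs the equivalent minimal, shutter-free path is AVCapturePhotoOutput. A hedged Swift sketch of that modern route (no camera-roll save, no picker UI; presenting the result for editing is up to you):

```swift
import AVFoundation
import UIKit

// Minimal still capture from the front or back camera, delivered as a UIImage.
final class StillCapture: NSObject, AVCapturePhotoCaptureDelegate {
    let session = AVCaptureSession()
    private let output = AVCapturePhotoOutput()
    private var completion: ((UIImage?) -> Void)?

    func start(position: AVCaptureDevice.Position) {
        guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video, position: position),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input), session.canAddOutput(output) else { return }
        session.addInput(input)
        session.addOutput(output)
        session.startRunning()
        // Add an AVCaptureVideoPreviewLayer(session: session) to your view
        // for the live preview, with your custom overlay on top.
    }

    func capture(_ completion: @escaping (UIImage?) -> Void) {
        self.completion = completion
        output.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // fileDataRepresentation() carries orientation metadata, so the image
        // comes out right-side up even in a landscape-only app.
        completion?(photo.fileDataRepresentation().flatMap(UIImage.init(data:)))
    }
}
```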