I want to know whether it is possible to delay the camera feed by a number of seconds on iOS with Swift.
Let's say I add a 30 second delay to my camera. If I am pointing the camera at a person who is crouching down, and then that person stands up, my camera is not going to show that until those 30 seconds have passed. In other words, it will continue to show the person crouched until that time has passed.
How can I achieve this?
This sounds extremely difficult. The problem is that the camera is bringing in (most likely) 30 frames a second, so the buffers from 30 seconds ago need to be saved somewhere, and in raw form that is a ton of data, far too much to just keep in memory (at 1080p in 32-bit BGRA that is roughly 1920 × 1080 × 4 bytes × 30 fps, about 250 MB per second, or 7.5 GB for a 30-second window).
You don't want to write the frames out as individual images either, because image compression is poor compared to video compression.
Perhaps the best way would be to set up your capture, record the video in, say, 10-second segments, save each segment to a file, then play those segment files back with an AVPlayer or something. That way it's 'delayed' for the user.
Either way, from my understanding of how things work, the main challenge is what to do with those buffers while waiting to display them. You could potentially stream the feed somewhere and stream it back, but then you need a whole backend to support that, which seems silly.
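To make the segment-and-replay idea concrete, here is a rough Swift sketch. It assumes an AVCaptureSession already configured with a camera input; the class name, the temporary-file naming, and the three-segment threshold are my own choices, not anything from the question:

import AVFoundation

// Rough sketch: record 10-second segments and queue them for delayed playback.
final class DelayedCameraFeed: NSObject, AVCaptureFileOutputRecordingDelegate {
    let session = AVCaptureSession()          // assumed configured with a camera input
    let movieOutput = AVCaptureMovieFileOutput()
    let player = AVQueuePlayer()              // displayed via an AVPlayerLayer elsewhere

    func start() {
        if session.canAddOutput(movieOutput) { session.addOutput(movieOutput) }
        // Each segment stops itself after 10 seconds.
        movieOutput.maxRecordedDuration = CMTime(seconds: 10, preferredTimescale: 600)
        session.startRunning()
        recordNextSegment()
    }

    private func recordNextSegment() {
        let url = FileManager.default.temporaryDirectory
            .appendingPathComponent(UUID().uuidString + ".mov")
        movieOutput.startRecording(to: url, recordingDelegate: self)
    }

    func fileOutput(_ output: AVCaptureFileOutput,
                    didFinishRecordingTo outputFileURL: URL,
                    from connections: [AVCaptureConnection],
                    error: Error?) {
        // Hitting maxRecordedDuration is reported as an error, but the file is still playable.
        player.insert(AVPlayerItem(url: outputFileURL), after: nil)
        // Start playback once three 10-second segments (roughly 30 s) are queued.
        if player.items().count == 3 { player.play() }
        recordNextSegment()
    }
}

Note that restarting the file output leaves a small gap between segments, and played segment files should be deleted, otherwise the delay buffer just fills the disk instead of memory.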
Related
Is it possible to apply a slow-motion effect while recording video?
That is, the recording has not finished yet and the file has not been saved, but the user already sees the recording process in slow motion.
I think it is important to understand what slow motion actually means. To "slow down motion" in a movie, you need to film more images per second than usual and then play the movie back at normal speed; that is what creates the slow-motion effect.
Example: videos are often shot at 30 frames per second (fps), so for one second of movie you're creating 30 single images. If you want a motion to appear half as fast, you need to shoot 60 fps (60 images per second). If you play those 60 images at half speed (the normal 30 fps), the result is a movie two seconds long showing the slow-motion effect.
As you can see, you cannot record and show a slow-motion effect at the same time. You'll need to save it first and then play it slower than recorded.
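To illustrate the save-first approach, here is a minimal Swift sketch that stretches an already recorded clip to half speed with AVMutableComposition; sourceURL and the factor of 2 are assumptions for the example:

import AVFoundation

// Minimal sketch: play a recorded clip slower than it was recorded by
// stretching its time range to twice the original duration.
func makeSlowMotion(from sourceURL: URL) throws -> AVMutableComposition {
    let asset = AVURLAsset(url: sourceURL)
    let composition = AVMutableComposition()
    let range = CMTimeRange(start: .zero, duration: asset.duration)
    try composition.insertTimeRange(range, of: asset, at: .zero)
    // Every frame is now shown twice as long: the "play slower than recorded" idea.
    composition.scaleTimeRange(range, toDuration: CMTimeMultiply(asset.duration, multiplier: 2))
    return composition  // play via AVPlayer(playerItem: AVPlayerItem(asset: composition))
}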
I would like to build an app that captures a Live Photo, but I have no idea how to record the video from 1.5 seconds before the button is pressed.
I think the basic idea is to keep recording into a buffer. The buffer should hold the last n seconds of video (1.5 seconds in your case). When the button is pressed, copy the buffer.
I haven't done any development on iOS, so I can't provide the code myself.
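To sketch the idea anyway, here is a rough Swift version of such a rolling buffer using AVCaptureVideoDataOutput; the class and property names are mine, and session setup is omitted:

import AVFoundation

// Rough sketch: keep roughly the last 1.5 s of frames in memory.
final class RollingFrameBuffer: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private var frames: [CMSampleBuffer] = []
    private let maxAge = CMTime(seconds: 1.5, preferredTimescale: 600)

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        frames.append(sampleBuffer)
        // Drop frames older than maxAge relative to the newest one.
        let newest = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        frames.removeAll { newest - CMSampleBufferGetPresentationTimeStamp($0) > maxAge }
        // Caveat: holding capture buffers keeps them out of the capture pipeline's
        // pool; real code should copy the data or keep the window small.
    }

    // On button press: hand these frames to an AVAssetWriter.
    func snapshot() -> [CMSampleBuffer] { frames }
}

The delegate would be registered with setSampleBufferDelegate(_:queue:) on a serial dispatch queue.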
I have two solutions to this problem:
SOLUTION A
Convert the asset to an AVMutableComposition.
For every second, keep only one frame by removing the timing of all the other frames using the removeTimeRange(...) method.
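A condensed sketch of what Solution A describes, in Swift; the 30 fps assumption and the function name are mine:

import AVFoundation

// Sketch of SOLUTION A: keep one frame per second, deleting the rest of
// each second with removeTimeRange(_:).
func dropFramesForTimelapse(in asset: AVAsset) throws -> AVMutableComposition {
    let composition = AVMutableComposition()
    try composition.insertTimeRange(
        CMTimeRange(start: .zero, duration: asset.duration), of: asset, at: .zero)
    let frameDuration = CMTime(value: 1, timescale: 30)   // assumes a 30 fps source
    var second = Int(asset.duration.seconds)
    // Walk backwards so that earlier time ranges keep their positions.
    while second > 0 {
        second -= 1
        let keepEnd = CMTime(seconds: Double(second), preferredTimescale: 600) + frameDuration
        let end = CMTime(seconds: Double(second + 1), preferredTimescale: 600)
        composition.removeTimeRange(CMTimeRange(start: keepEnd, end: end))
    }
    return composition
}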
SOLUTION B
Use AVAssetReader to extract all the individual frames as an array of CMSampleBuffers.
Write the [CMSampleBuffer] back into a movie, skipping every 20 frames or so as required.
Convert the resulting video file to an AVMutableComposition and use scaleTimeRange(...) to reduce the overall time range of the video for the timelapse effect.
PROBLEMS
The first solution is not suitable for full-HD videos: the video freezes in multiple places and the seek bar shows inaccurate timing.
E.g. a 12-second timelapse might be reported as having a duration of only 5 seconds, so it keeps playing even after the seek bar has finished.
I mean the timing of the video gets all messed up for some reason.
The second solution is incredibly slow. For a 10-minute HD video, memory usage grows without bound, since all the work is done in memory.
I am searching for a technique that can produce a timelapse for a video right away, without waiting. Solution A kind of does that, but it is unsuitable because of the timing problems and stuttering.
Any suggestion would be great. Thanks!
You might want to experiment with the built-in thumbnail-generation functions to see if they are fast/efficient enough for your needs.
They have the benefit of being optimised to generate images efficiently from a video stream.
Simply displaying a slide-show-like view of the thumbnails one after another may give you the effect you are looking for.
There is information on the key class, AVAssetImageGenerator, here, including how to use it to generate multiple images:
https://developer.apple.com/reference/avfoundation/avassetimagegenerator#//apple_ref/occ/instm/AVAssetImageGenerator/generateCGImagesAsynchronouslyForTimes%3acompletionHandler%3a
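A short sketch of that approach, pulling one thumbnail per second; the spacing and names are illustrative, not prescriptive:

import AVFoundation

// Sketch: generate one thumbnail per second and feed them to a slide-show view.
func generateTimelapseFrames(for asset: AVAsset,
                             handler: @escaping (CGImage) -> Void) {
    let generator = AVAssetImageGenerator(asset: asset)
    generator.appliesPreferredTrackTransform = true
    // The default time tolerances are loose, which is part of what makes this
    // fast; tighten them only if exact frame times matter.
    let times = stride(from: 0.0, to: asset.duration.seconds, by: 1.0).map {
        NSValue(time: CMTime(seconds: $0, preferredTimescale: 600))
    }
    generator.generateCGImagesAsynchronously(forTimes: times) { _, image, _, result, _ in
        if result == .succeeded, let image = image {
            handler(image)   // e.g. append to the slide show
        }
    }
}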
I want the user to record a continuous video at 240 FPS using an iPhone 6. When the user presses a button, it should save just the last 10 seconds to the file system for later viewing pleasure.
How do I achieve this? I haven't found any good answer to this problem.
I'm toying around with AVCaptureSession and AVCaptureMovieFileOutput, which automatically writes the stream to disk.
One approach is to not use AVCaptureMovieFileOutput, build a ring buffer in memory, and on button press just save those 10 seconds to disk. Is that achievable using NSData?
The second approach I can think of is to use AVCaptureMovieFileOutput as-is, but on button press keep just the last 10 seconds and cut off the rest to save space (see the sketch below).
Currently I don't have a clue how to do either of those two approaches. Can someone point me in the right direction?
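For what it's worth, the second approach (trimming a finished recording down to its last 10 seconds) could look roughly like this with AVAssetExportSession; the URLs are placeholders, and it assumes the recording is at least 10 seconds long:

import AVFoundation

// Sketch of approach two: export only the final 10 seconds of a recording.
func keepLastTenSeconds(of recordedURL: URL, to trimmedURL: URL) {
    let asset = AVURLAsset(url: recordedURL)
    guard let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPresetHighestQuality)
    else { return }
    let ten = CMTime(seconds: 10, preferredTimescale: 600)
    export.outputURL = trimmedURL
    export.outputFileType = .mov
    // Export only the final 10 seconds of the asset.
    export.timeRange = CMTimeRange(start: asset.duration - ten, duration: ten)
    export.exportAsynchronously {
        // Inspect export.status and export.error here.
    }
}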
I am attempting to record an animation (computer graphics, not video) to a WMV file using DirectShow. The setup is:
A Push Source that uses an in-memory bitmap holding the animation frame. Each time FillBuffer() is called, the bitmap's data is copied over into the sample, and the sample is timestamped with a start time (frame number * frame length) and duration (frame length). The frame rate is set to 10 frames per second in the filter.
An ASF Writer filter. I have a custom profile file that sets the video to 10 frames per second. It's a video-only filter, so there's no audio.
The pins connect, and when the graph is run, a wmv file is created. But...
The problem is it appears DirectShow is pushing data from the Push Source at a rate greater than 10 FPS. So the resultant wmv, while playable and containing the correct animation (as well as reporting the correct FPS), plays the animation back several times too slowly because too many frames were added to the video during recording. That is, a 10 second video at 10 FPS should only have 100 frames, but about 500 are being stuffed into the video, resulting in the video being 50 seconds long.
My initial attempt at a solution was simply to slow down the FillBuffer() call by adding a sleep() of 1/10th of a second. That does more or less work, but it seems hackish, and I question whether it would work well at higher frame rates.
So I'm wondering if there's a better way to do this. Actually, I'm assuming there's a better way and I'm just missing it. Or do I just need to smarten up the manner in which FillBuffer() in the Push Source is delayed and use a better timing mechanism?
Any suggestions would be greatly appreciated!
I do this with threads. The main thread is adding bitmaps to a list and the recorder thread takes bitmaps from that list.
Main thread
Animate your graphics at time T and render the bitmap.
Add the bitmap to the render list. If the list is full (say, more than 8 frames), wait; this is so you won't use too much memory.
Advance T by the delta time corresponding to the desired frame rate.
Recorder thread
When a frame is requested, pick and remove a bitmap from the render list. If the list is empty, wait.
You need a thread-safe structure such as TThreadList to hold the bitmaps. It's a bit tricky to get right, but your current approach is guaranteed to give you timing problems.
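The pattern itself is language-agnostic; purely for illustration, here is a minimal bounded producer/consumer queue in Swift with the 8-frame cap suggested above:

import Foundation

// Minimal bounded queue: the producer blocks when 8 frames are queued,
// the consumer blocks when the queue is empty.
final class BoundedFrameQueue<Frame> {
    private var frames: [Frame] = []
    private let lock = NSLock()
    private let freeSlots = DispatchSemaphore(value: 8)
    private let filledSlots = DispatchSemaphore(value: 0)

    func push(_ frame: Frame) {   // main thread
        freeSlots.wait()
        lock.lock(); frames.append(frame); lock.unlock()
        filledSlots.signal()
    }

    func pop() -> Frame {         // recorder thread
        filledSlots.wait()
        lock.lock(); let frame = frames.removeFirst(); lock.unlock()
        freeSlots.signal()
        return frame
    }
}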
I am doing just this in my recorder application (www.videophill.com) for purposes of testing the whole thing.
I am using the Sleep() method to delay the frames, but I take great care to ensure that the timestamps of the frames are correct. Also, when Sleep()ing from frame to frame, try to use 'absolute' time differences, because Sleep(100) will sleep for about 100 ms, not exactly 100 ms.
If it won't work for you, you can always go for IReferenceClock, but I think that's overkill here.
So:
DateTime start = DateTime.Now;
int frameCounter = 0;
while (wePush)
{
    FillDataBuffer(...);
    frameCounter++;
    // Schedule each frame against the absolute start time so that
    // sleep inaccuracies don't accumulate from frame to frame.
    DateTime nextFrameTime = start.AddMilliseconds(frameCounter * 100);
    int delay = (int)(nextFrameTime - DateTime.Now).TotalMilliseconds;
    if (delay > 0)
        Thread.Sleep(delay);
}
EDIT:
Keep in mind: IWMWriter is time-insensitive as long as you feed it with samples that are properly time-stamped.