I'm trying to initialize GPUImageMovie with an AVAsset, which is an AVMutableComposition. However, it will only show the last frame after it has run all benchmark frames. If I use initWithURL it works fine, but I don't want to generate a file.
I've also tried creating an AVAsset from a file URL with the same result.
_movieFile = [[GPUImageMovie alloc] initWithAsset:_myAsset];
_movieFile.runBenchmark = YES;
_movieFile.playAtActualSpeed = YES;
_filterView = [[GPUImageView alloc] initWithFrame:[self windowSize]];
Any ideas why this doesn't work?
Solved it. What I did was create a GPUImageMovieWriter without actually calling [movieWriter startRecording].
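Roughly, the workaround looks like this; _movieURL and the output size are just placeholders, and the writer is wired up as a target but -startRecording is never called:
// Minimal sketch of the workaround, reusing _movieFile and _filterView from the question.
// _movieURL and the size are placeholders; the writer is never started.
_movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:_movieURL size:CGSizeMake(640.0, 480.0)];
[_movieFile addTarget:_filterView];
[_movieFile addTarget:_movieWriter];
[_movieFile startProcessing];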
I am trying to capture video with PBJVision.
I set up the camera like this:
vision.cameraMode = PBJCameraModeVideo;
vision.cameraOrientation = PBJCameraOrientationPortrait;
vision.outputFormat = PBJOutputFormatWidescreen;
And this produces output 1280x720 where 1280 is width.
Setting orientation to Landscape ROTATES the stream.
I have also been recording video with GPUImage, and there I can do this:
videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset1280x720
cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
_movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:_movieURL size:CGSizeMake(720.0, 1280.0)];
So that I get vertical output.
I would like to achieve vertical output with PBJVision, because I am having problems with GPUImage writing video to disk (I will ask a separate question about that).
What method/property of AVFoundation is responsible for giving the vertical output instead of horizontal?
Sorry for the question; I have been googling for two days and can't find the answer.
I was having the same issue. I changed the output format to vision.outputFormat = PBJOutputFormatPreset; and now I get portrait/vertical output.
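In context, the setup would look roughly like this (other properties kept from the question; PBJVision's shared instance is assumed):
// Sketch: same setup as in the question, with the output format swapped
// to PBJOutputFormatPreset, which produced portrait output for me.
PBJVision *vision = [PBJVision sharedInstance];
vision.cameraMode = PBJCameraModeVideo;
vision.cameraOrientation = PBJCameraOrientationPortrait;
vision.outputFormat = PBJOutputFormatPreset;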
When I publish a stream on iOS, TokBox uses the default camera. Is there a way to add live filters to the publisher?
I just want some easy, sample code on how to create a filter and attach it to the opentok publisher object (OTVideoCapture).
Or, if that's not the right way to do it...attaching the filter on the subscriber side works too.
How can this be done easily?
As I understand it, you want to apply filters to the video before sending it, and in real time. There is no ready-made source code for this, but I can outline the path.
For real-time video filters you can use the GPUImage framework. It has a ready-to-use camera class, GPUImageVideoCamera. You then need to create a class that implements GPUImageInput (a target, in GPUImage terms), which produces an OTVideoFrame from its input, and add it to the pipeline.
Something like this:
videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
videoCamera.horizontallyMirrorFrontFacingCamera = NO;
videoCamera.horizontallyMirrorRearFacingCamera = NO;
// filter
filter = [[GPUImageSepiaFilter alloc] init];
[videoCamera addTarget:filter];
// frame producer for OTVideoCapture
frameProducer = [[FrameProducer alloc] init];
[filter addTarget:frameProducer];
// camera view to show what we record
[filter addTarget:filterView];
You also need a custom implementation of the OTVideoCapture protocol for OpenTok itself. You can use TBExampleVideoCapture from the Lets-Build-OTPublisher sample as a starting point; replace its camera code with the GPUImageVideoCamera code above to get filters in real time.
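As a rough sketch of the frame-producer piece (not a full implementation): you can build it on GPUImage's GPUImageRawDataOutput, which already conforms to GPUImageInput and hands you BGRA bytes. The OpenTok side is only hinted at in comments, because the exact OTVideoFrame/consumer calls depend on your OTVideoCapture implementation and SDK version.
// Sketch: a frame producer built on GPUImageRawDataOutput so we don't have to
// implement the whole GPUImageInput protocol by hand. Adjust the size to match
// your pipeline output (e.g. 480x640 for the portrait-oriented 640x480 preset).
GPUImageRawDataOutput *frameProducer =
    [[GPUImageRawDataOutput alloc] initWithImageSize:CGSizeMake(640.0, 480.0)
                                 resultsInBGRAFormat:YES];

__weak GPUImageRawDataOutput *weakProducer = frameProducer;
frameProducer.newFrameAvailableBlock = ^{
    GLubyte *bgraBytes = [weakProducer rawBytesForImage];
    NSUInteger bytesPerRow = [weakProducer bytesPerRowInOutput];
    // Wrap bgraBytes / bytesPerRow in an OTVideoFrame here and hand it to the
    // consumer of your OTVideoCapture implementation (something along the lines
    // of [videoCaptureConsumer consumeFrame:frame] -- check your OpenTok SDK).
};

[filter addTarget:frameProducer]; // insert it into the GPUImage pipeline after the filter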
I am using the impressive GPUImage framework for iOS to filter the incoming camera stream and save it afterwards. When that is done, I want to close the containing view and open another one with a playback option, for viewing the movie you just shot. The thing is, when I do nothing fancy, I can finish the recording, save the file to a temp location, and copy it to the camera roll synchronously. But when I use audio input and filters, GPUImageMovieWriter seems to slow down too much and produces empty frames at the beginning or end of the video.
To set up the filters and stuff, and make the preview of the live filtered image I do this:
videoCamera=[[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPresetiFrame960x540 cameraPosition:AVCaptureDevicePositionBack];
movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:outputURL size:CGSizeMake(self.view.frame.size.width, self.view.frame.size.height)];
movieWriter.encodingLiveVideo = YES;
movieWriter.shouldPassthroughAudio = YES;
videoCamera.audioEncodingTarget = movieWriter;
_filteredVideoView = [[GPUImageView alloc]initWithFrame:CGRectMake(0.0, 0.0, self.view.frame.size.width, self.view.frame.size.height)];
[self.view addSubview:_filteredVideoView];
blendFilter = [[GPUImageNormalBlendFilter alloc]init];
sourcePicture = [[GPUImagePicture alloc] initWithCGImage:[mc getImage]];
[sourcePicture addTarget:blendFilter]; // overlay image feeds one blend input
[videoCamera addTarget:blendFilter];   // camera feeds the other blend input
[blendFilter addTarget:_filteredVideoView];
[sourcePicture processImage];
[videoCamera startCameraCapture];
where all variables like outputURL and mc are properly set. To start the actual recording I call this:
[blendFilter addTarget:movieWriter];
[movieWriter startRecording];
To stop recording and save to the camera roll I have this code:
videoCamera.audioEncodingTarget = nil;
[movieWriter finishRecording];
[blendFilter removeTarget:movieWriter];
UISaveVideoAtPathToSavedPhotosAlbum(outputPath, nil, NULL, NULL);
[self gotoNextView];
Probably the recording starts before the frames are coming through, and/or the file is being copied to the camera roll while the temp file is still being written. How do I make sure everything gets written to the file, and how do I get notified when it's done so I can move to the next view?
[self.movieWriter finishRecordingWithCompletionHandler:handler]
Putting your save-video code into the completion handler block ensures it only runs once the video has finished recording.
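A minimal sketch of the stop-recording path with the completion handler, reusing the names from the question (outputPath is assumed to be the path of the temp file the writer targets):
// Stop recording, and only save/navigate once the writer reports it is done.
videoCamera.audioEncodingTarget = nil;
[movieWriter finishRecordingWithCompletionHandler:^{
    [blendFilter removeTarget:movieWriter];
    UISaveVideoAtPathToSavedPhotosAlbum(outputPath, nil, NULL, NULL);
    dispatch_async(dispatch_get_main_queue(), ^{
        [self gotoNextView];
    });
}];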
I have implemented a filter group (GPUImageSepiaFilter, GPUImageExposureFilter, GPUImageSaturationFilter) for image editing, and I have a slider that sets a custom exposure (setExposure:) value. In the slider's value-changed action method, I refresh the image preview by calling processImage on the picture. If I move the slider very fast, or scroll it back and forth repeatedly, the app reliably crashes due to a memory issue.
- (void)viewDidLoad {
[super viewDidLoad];
self.originalPicture = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"IMG_0009.JPG"]];
self.filterGroup = [[GPUImageFilterGroup alloc] init];
GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];
[self.filterGroup addFilter:sepiaFilter];
GPUImageExposureFilter *pixellateFilter = [[GPUImageExposureFilter alloc] init];
[pixellateFilter setExposure:0.0f];
[self.filterGroup addFilter:pixellateFilter];
GPUImageSaturationFilter *saturation = [[GPUImageSaturationFilter alloc] init];
[self.filterGroup addFilter:saturation];
[sepiaFilter addTarget:pixellateFilter];
[pixellateFilter addTarget:saturation];
[self.filterGroup setInitialFilters:[NSArray arrayWithObjects:sepiaFilter, pixellateFilter,nil]];
[self.filterGroup setTerminalFilter:saturation];
[self.originalPicture addTarget:self.filterGroup];
GPUImageView *filterView = [[GPUImageView alloc] init];
self.view = filterView;
[self.filterGroup addTarget:filterView];
[self.originalPicture processImage];
[self.slider setMinimumTrackTintColor:[UIColor redColor]];
[self.slider setMaximumTrackTintColor:[UIColor greenColor]];
[self.view addSubview:self.slider];
}
- (IBAction)didChangeValue:(id)sender {
GPUImageExposureFilter *filter = (GPUImageExposureFilter *)[self.filterGroup filterAtIndex:1];
[filter setExposure:self.slider.value];
[self.originalPicture processImage];
}
What is the best way to fix this? Or am I doing something wrong?
Here is just some advice about images and GPUImage:
The size of an image on disk doesn't represent its real size, because the file may be compressed. The real size, once the image is decompressed in memory and the system has to handle it, is height * width * number of channels * bits per channel (for example, a 3264x2448 RGBA photo takes roughly 32 MB once decoded).
Images should always be loaded lazily; it's useless to keep them around if you are not using them.
Brad made a big change to his framework around framebuffer reuse that greatly improved how memory is handled. Are you sure you are using the latest version from GitHub?
Have you tried profiling the app with the Allocations instrument? Maybe the problem is somewhere else; with that tool you can see whether memory grows where you expect it to.
The imageNamed: method caches images, and even though the documentation says this memory is evicted under memory pressure, I have never actually seen that purge happen (see the sketch below for loading without the cache).
I don't see anything wrong with how your code uses GPUImage, but I would try smaller images (in pixel size) and, first of all, profile with Allocations.
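For the imageNamed: caching point above, here is a small sketch of loading the source image without going through that cache (the file name and extension are just taken from your code for illustration):
// Load the image from the bundle without the imageNamed: cache.
NSString *path = [[NSBundle mainBundle] pathForResource:@"IMG_0009" ofType:@"JPG"];
UIImage *sourceImage = [UIImage imageWithContentsOfFile:path];
self.originalPicture = [[GPUImagePicture alloc] initWithImage:sourceImage];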
I capture a movie with UIImagePickerController using the following:
self.cameraUI = [[UIImagePickerController alloc] init];
self.cameraUI.sourceType = UIImagePickerControllerSourceTypeCamera;
self.cameraUI.mediaTypes = @[(NSString *)kUTTypeMovie];
self.cameraUI.delegate = self;
self.cameraUI.videoQuality = UIImagePickerControllerQualityTypeMedium;
self.cameraUI.cameraDevice = UIImagePickerControllerCameraDeviceFront;
In the playback of the video I see that it looks good.
Then I try to load the movie with MPMoviePlayerViewController
self.player = [[MPMoviePlayerViewController alloc] initWithContentURL:self.chacha];
self.player.moviePlayer.movieSourceType = MPMovieSourceTypeFile;
[self presentMoviePlayerViewControllerAnimated:self.player];
For a split second when the movie view controller is presented, the video appears warped before the aspect ratio corrects itself.
Anyone know what might be causing this? I have messed with settings for the movie controller and nothing there I have tried has helped.
Perhaps you could render it off-screen, or set its hidden property to YES until it finishes loading and you're sure the aspect ratio is correct.
I don't have much experience with that player.
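If you try the hidden-until-ready idea, a sketch might look like this, keeping the player view hidden until MPMoviePlayerController reports a playable load state (I haven't verified this fixes the warped first frame):
// Hide the player view until the movie reports it is playable.
self.player = [[MPMoviePlayerViewController alloc] initWithContentURL:self.chacha];
self.player.moviePlayer.movieSourceType = MPMovieSourceTypeFile;
self.player.moviePlayer.view.hidden = YES;

// Keep the returned token if you need to remove the observer later.
[[NSNotificationCenter defaultCenter] addObserverForName:MPMoviePlayerLoadStateDidChangeNotification
                                                  object:self.player.moviePlayer
                                                   queue:[NSOperationQueue mainQueue]
                                              usingBlock:^(NSNotification *note) {
    if (self.player.moviePlayer.loadState & MPMovieLoadStatePlayable) {
        self.player.moviePlayer.view.hidden = NO;
    }
}];

[self presentMoviePlayerViewControllerAnimated:self.player];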