I'm trying to overlay a GPUImagePicture and a GPUImageUIElement on a video.
The GPUImageUIElement is working, but I have a problem with the GPUImagePicture: I only see it on the first frame, and then it disappears.
Here's my code:
filter = [[GPUImageBrightnessFilter alloc] init];
[(GPUImageBrightnessFilter *)filter setBrightness:0];
GPUImageAlphaBlendFilter *blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
blendFilter.mix = 1.0;
GPUImagePicture *overlay = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"Logo.png"] smoothlyScaleOutput:YES];
GPUImageTransformFilter *transformFilter = [GPUImageTransformFilter new];
[transformFilter forceProcessingAtSize:CGSizeMake(73, 83)];
[transformFilter setAffineTransform:CGAffineTransformMakeScale(0.7, 0.7)];
[overlay addTarget:transformFilter];
[overlay processImage];
UIView *subview1 = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 320, 400)];
UILabel *temperaturaText = [[UILabel alloc]initWithFrame:CGRectMake(77, 100, 105, 60)];
temperaturaText.text = @"test";
[subview1 addSubview:temperaturaText];
uiElementInput = [[GPUImageUIElement alloc] initWithView:subview1];
[filter addTarget:blendFilter];
[uiElementInput addTarget:blendFilter];
[blendFilter addTarget:filterView];
[overlay addTarget:filterView];
__unsafe_unretained GPUImageUIElement *weakUIElementInput = uiElementInput;
[filter setFrameProcessingCompletionBlock:^(GPUImageOutput *filter, CMTime frameTime) {
[weakUIElementInput update];
}];
[blendFilter addTarget:movieWriter];
Your problem is that you define your input picture, overlay, as a local variable within your setup method. You aren't holding a strong reference to it, so it will be deallocated the instant that method finishes, which also removes the image texture and its output from your processing pipeline.
If you want to hold on to an input image, you need to make overlay an instance variable of your class, as you did for your camera or movie input above. It will then persist and be available to the framework as an input.
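For example, a minimal sketch of that change (the class name here is illustrative):
@interface MyViewController ()
{
    GPUImagePicture *overlay; // strong ivar keeps the picture alive for the pipeline
}
@end

// In the setup method, assign to the ivar instead of declaring a local:
overlay = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"Logo.png"] smoothlyScaleOutput:YES];
[overlay addTarget:transformFilter];
[overlay processImage];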
I am trying to overlay some text on a video and have not had any success so far.
videoCamera = [[GPUImageStillCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
cropFilter = [[GPUImageCropFilter alloc] initWithCropRegion:CGRectMake(0, 0, 1, 1)];
mCurrentImage = [UIImage imageNamed:@"tex16"];
sourcePicture = [[GPUImagePicture alloc] initWithImage:mCurrentImage smoothlyScaleOutput:NO];
[sourcePicture processImage];
customFilter = [[GPUFilter alloc] initWithFragmentShaderFromFile:@"shader"];
[videoCamera addTarget:cropFilter];
[cropFilter addTarget:customFilter atTextureLocation:0];
[sourcePicture addTarget:customFilter atTextureLocation:1];
[customFilter addTarget:mViewCameraPreview];//(GPUImageView*)mViewCameraPreview];
blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
blendFilter.mix = 1.0;
UILabel *timeLabel = [[UILabel alloc] initWithFrame:CGRectMake(0.0, 0.0, 240.0f, 320.0f)];
timeLabel.font = [UIFont systemFontOfSize:17.0f];
timeLabel.text = @"Time: 0.0 s";
timeLabel.textAlignment = NSTextAlignmentCenter;
timeLabel.backgroundColor = [UIColor clearColor];
timeLabel.textColor = [UIColor whiteColor];
uiElementInput = [[GPUImageUIElement alloc] initWithView:timeLabel];
[customFilter addTarget:blendFilter];
[uiElementInput addTarget:blendFilter];
[blendFilter addTarget:mViewCameraPreview];
[videoCamera startCameraCapture];
Everything compiles and runs without throwing any exceptions; however, there is no text to be found.
Does anyone see what I am doing wrong?
Thanks.
Did you try performing an update of uiElementInput right after starting camera capture? If not, try adding this code at the end of the code you provided:
__unsafe_unretained GPUImageUIElement *weakUIElementInput = uiElementInput;
[filter setFrameProcessingCompletionBlock:^(GPUImageOutput * filter, CMTime frameTime){
[weakUIElementInput update];
}];
If that doesn't work, or if you've already done it, try with a basic filter (no custom filter) to see whether the problem persists.
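For example, a minimal sketch of that sanity check, assuming the rest of the pipeline from your code stays the same:
// Swap the custom shader for a stock pass-through filter
GPUImageFilter *basicFilter = [[GPUImageFilter alloc] init];
[cropFilter addTarget:basicFilter];
[basicFilter addTarget:blendFilter];
[uiElementInput addTarget:blendFilter];
[blendFilter addTarget:mViewCameraPreview];

// Drive the UI element from the pass-through filter's completion block
__unsafe_unretained GPUImageUIElement *weakUIElementInput = uiElementInput;
[basicFilter setFrameProcessingCompletionBlock:^(GPUImageOutput *output, CMTime frameTime){
    [weakUIElementInput update];
}];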
I keep getting this error on and off. I've seen some solutions that recommend using a GPUImageNormalBlendFilter in the filter chain; however, doing so results in a solid grey output.
videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset1280x720 cameraPosition:AVCaptureDevicePositionFront];
filter = [[GPUImageFilter alloc] init];
[videoCamera addTarget:filter];
blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
blendFilter.mix = 1.0;
animatedImageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
[contentView addSubview:animatedImageView];
[contentView addSubview:[self watermark]];
uiElementInput = [[GPUImageUIElement alloc] initWithView:contentView];
[filter addTarget:blendFilter];
[uiElementInput addTarget:blendFilter];
[blendFilter addTarget:filteredVideoView];
(filteredVideoView is a GPUImageView.)
You must use setFrameProcessingCompletionBlock and invoke the UI element's update method from it:
__weak typeof(self) weakSelf = self;
[filter setFrameProcessingCompletionBlock:^(GPUImageOutput *filter, CMTime frameTime) {
[weakSelf.uiElementInput update];
}];
[videoCamera startCameraCapture];
I am working on a project that requires a group of effects.
I am successfully using a GPUImageFilterGroup, as per the example in FilterShowcase, as follows:
filter = [[GPUImageFilterGroup alloc] init];
GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];
[(GPUImageFilterGroup *)filter addFilter:sepiaFilter];
GPUImagePixellateFilter *pixellateFilter = [[GPUImagePixellateFilter alloc] init];
[(GPUImageFilterGroup *)filter addFilter:pixellateFilter];
[sepiaFilter addTarget:pixellateFilter];
[(GPUImageFilterGroup *)filter setInitialFilters:[NSArray arrayWithObject:sepiaFilter]];
[(GPUImageFilterGroup *)filter setTerminalFilter:pixellateFilter];
But now I would like to add a new filter to the group, such as the Harris corner detection filter (GPUIMAGE_HARRISCORNERDETECTION); this filter also requires a blend.
Here is the filter initialization:
filter = [[GPUImageHarrisCornerDetectionFilter alloc] init];
[(GPUImageHarrisCornerDetectionFilter *)filter setThreshold:0.20];
and then it requires the blending as follows:
GPUImageCrosshairGenerator *crosshairGenerator = [[GPUImageCrosshairGenerator alloc] init];
crosshairGenerator.crosshairWidth = 15.0;
[crosshairGenerator forceProcessingAtSize:CGSizeMake(480.0, 640.0)];
[(GPUImageHarrisCornerDetectionFilter *)filter setCornersDetectedBlock:^(GLfloat* cornerArray, NSUInteger cornersDetected, CMTime frameTime) {
[crosshairGenerator renderCrosshairsFromArray:cornerArray count:cornersDetected frameTime:frameTime];
}];
GPUImageAlphaBlendFilter *blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
[blendFilter forceProcessingAtSize:CGSizeMake(480.0, 640.0)];
GPUImageGammaFilter *gammaFilter = [[GPUImageGammaFilter alloc] init];
[videoCamera addTarget:gammaFilter];
[gammaFilter addTarget:blendFilter];
[crosshairGenerator addTarget:blendFilter];
[blendFilter addTarget:filterView];
Is there a way to add the GPUImageCrosshairGenerator, GPUImageAlphaBlendFilter, & GPUImageGammaFilter to the filter group?
Thank you
More specific detail follows:
=============================================
Code that works based on FilterShowcase example:
The test class GPUImageDrawTriangleTest simply draws random triangles over the live video source:
self.title = @"DRAWING TRIANGLES TESTING";
triangleFilter = [[GPUImageDrawTriangleTest alloc] init];
[((GPUImageDrawTriangleTest *)triangleFilter) setDrawColorRed:1.0 green:0.0 blue:1.0];
filter = [[GPUImageContrastFilter alloc] init];
__unsafe_unretained GPUImageDrawTriangleTest *weakGPUImageTestCust = (GPUImageDrawTriangleTest *)triangleFilter;
[filter setFrameProcessingCompletionBlock:^(GPUImageOutput * filter, CMTime frameTime){
[weakGPUImageTestCust update:frameTime];
}];
blendingFilters = TRUE;
blendFilter = nil;
blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
[blendFilter forceProcessingAtSize:CGSizeMake(640.0, 480.0)];
GPUImageGammaFilter *gammaFilter = [[GPUImageGammaFilter alloc] init];
[videoCamera addTarget:gammaFilter];
[gammaFilter addTarget:blendFilter];
blendFilter.mix = 1.0;
[triangleFilter addTarget:blendFilter];
[filter addTarget:blendFilter];
[blendFilter addTarget:filterView];
[filter addTarget:filterView];
[videoCamera addTarget:filter];
Based on the FilterShowcase example and the template of the GPUImageUnsharpMaskFilter group, I created GPUImageParticleGroupTest:
#import "GPUImageParticleGroupTest.h"
#import "GPUImageFilter.h"
#import "GPUImageGammaFilter.h"
#import "GPUImageDrawTriangleTest.h"
#import "GPUImageContrastFilter.h"
#import "GPUImageAlphaBlendFilter.h"
#import "GPUImageDrawTriangleTest.h"
@implementation GPUImageParticleGroupTest
- (id)init;
{
if (!(self = [super init]))
{
return nil;
}
contrastFilter = [[GPUImageContrastFilter alloc] init];
[self addFilter:contrastFilter];
gammaFilter = [[GPUImageGammaFilter alloc] init];
[self addFilter:gammaFilter];
triangleFilter = [[GPUImageDrawTriangleTest alloc] init];
[((GPUImageDrawTriangleTest *)triangleFilter) setDrawColorRed:1.0 green:0.0 blue:1.0];
//[self addFilter:triangleFilter];
__unsafe_unretained GPUImageDrawTriangleTest *weakGPUImageTestCust = (GPUImageDrawTriangleTest *)triangleFilter;
[ contrastFilter setFrameProcessingCompletionBlock:^(GPUImageOutput * contrastfilter, CMTime frameTime){
[weakGPUImageTestCust update:frameTime];
}];
theblendFilter = [[GPUImageAlphaBlendFilter alloc] init];
theblendFilter.mix = 1.0;
[self addFilter:theblendFilter];
[gammaFilter addTarget:theblendFilter atTextureLocation:1];
[triangleFilter addTarget:theblendFilter atTextureLocation:1];
[contrastFilter addTarget:theblendFilter atTextureLocation:1];
self.initialFilters = [NSArray arrayWithObjects:contrastFilter,gammaFilter, nil];
self.terminalFilter = theblendFilter;
return self;
}
@end
The intent was that when this group class is instantiated as follows:
filter= [[GPUImageParticleGroupTest alloc] init];
[filter addTarget:filterView];
[videoCamera addTarget:filter];
I would get the same result, with the same random triangles drawn over live video. The app does not crash, but I no longer get any live video or triangles.
Where did I go wrong?
When dealing with a GPUImageFilterGroup that needs to blend the input image with something generated inside the group, there's only one other thing you need to deal with, and that's making sure targets get added to the blend in the right order.
Look at the GPUImageUnsharpMaskFilter as an example. It takes in input frames to that group, passes them through a blur filter, and then blends the output of that blur filter with the input image. To set this up, it uses the following code:
// First pass: apply a variable Gaussian blur
blurFilter = [[GPUImageGaussianBlurFilter alloc] init];
[self addFilter:blurFilter];
// Second pass: combine the blurred image with the original sharp one
unsharpMaskFilter = [[GPUImageTwoInputFilter alloc] initWithFragmentShaderFromString:kGPUImageUnsharpMaskFragmentShaderString];
[self addFilter:unsharpMaskFilter];
// Texture location 0 needs to be the sharp image for both the blur and the second stage processing
[blurFilter addTarget:unsharpMaskFilter atTextureLocation:1];
self.initialFilters = [NSArray arrayWithObjects:blurFilter, unsharpMaskFilter, nil];
self.terminalFilter = unsharpMaskFilter;
The one new method used here is -addTarget:atTextureLocation:, which makes sure that the input image is added as the first input for both the blur filter and the later blend. Also note that there are two initialFilters for this group, to make sure the input image goes to those two filters.
You'd need to do something similar with the above code, feeding input into the Harris corner detector as well as your blend. It should be reasonably straightforward to do based on the code you already have from the FilterShowcase example and the template of the GPUImageUnsharpMaskFilter group.
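For instance, a rough sketch of such a group's init, reusing the objects from your snippet above (untested; the ivar names are illustrative):
- (id)init
{
    if (!(self = [super init]))
    {
        return nil;
    }

    // Gamma filter passes the camera image through to the blend
    gammaFilter = [[GPUImageGammaFilter alloc] init];
    [self addFilter:gammaFilter];

    // Harris corner detector also receives the group's input
    cornerFilter = [[GPUImageHarrisCornerDetectionFilter alloc] init];
    [(GPUImageHarrisCornerDetectionFilter *)cornerFilter setThreshold:0.20];
    [self addFilter:cornerFilter];

    // Crosshair generator is driven by the detector's callback, not by the input image
    crosshairGenerator = [[GPUImageCrosshairGenerator alloc] init];
    crosshairGenerator.crosshairWidth = 15.0;
    [crosshairGenerator forceProcessingAtSize:CGSizeMake(480.0, 640.0)];

    __unsafe_unretained GPUImageCrosshairGenerator *weakCrosshairGenerator = crosshairGenerator;
    [(GPUImageHarrisCornerDetectionFilter *)cornerFilter setCornersDetectedBlock:^(GLfloat *cornerArray, NSUInteger cornersDetected, CMTime frameTime) {
        [weakCrosshairGenerator renderCrosshairsFromArray:cornerArray count:cornersDetected frameTime:frameTime];
    }];

    blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
    blendFilter.mix = 1.0;
    [blendFilter forceProcessingAtSize:CGSizeMake(480.0, 640.0)];
    [self addFilter:blendFilter];

    // Camera image must be texture 0 of the blend; crosshairs go to texture 1
    [gammaFilter addTarget:blendFilter atTextureLocation:0];
    [crosshairGenerator addTarget:blendFilter atTextureLocation:1];

    // The group's input needs to feed both the gamma filter and the corner detector
    self.initialFilters = [NSArray arrayWithObjects:gammaFilter, cornerFilter, nil];
    self.terminalFilter = blendFilter;

    return self;
}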
I am using Brad Larson's GPUImage framework to add a UIImage element. I have successfully added the image, but the main issue is that the image gets stretched to the video's aspect ratio.
Here is my code:
GPUImageView *filterView = (GPUImageView *)self.view;
videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
transformFilter=[[GPUImageTransformFilter alloc]init];
CGAffineTransform t=CGAffineTransformMakeScale(0.5, 0.5);
[(GPUImageTransformFilter *)filter setAffineTransform:t];
[videoCamera addTarget:transformFilter];
filter = [[GPUImageOverlayBlendFilter alloc] init];
[videoCamera addTarget:filter];
inputImage = [UIImage imageNamed:@"eye.png"];
sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage smoothlyScaleOutput:YES];
[sourcePicture forceProcessingAtSize:CGSizeMake(50, 50)];
[sourcePicture processImage];
[sourcePicture addTarget:filter];
[sourcePicture addTarget:transformFilter];
[filter addTarget:filterView];
[videoCamera startCameraCapture];
I have tried using a transform filter before blending the image, but it isn't getting scaled.
I want the image to appear at the center. How do I do it?
Thanks
You are on the right track; you just have a few things out of place.
The following code will load an overlay image and apply a transformation to keep it at its actual size. By default it will be centered over the video.
GPUImageView *filterView = (GPUImageView *)self.view;
videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
filter = [[GPUImageOverlayBlendFilter alloc] init];
transformFilter = [[GPUImageTransformFilter alloc]init];
[videoCamera addTarget:filter];
[transformFilter addTarget:filter];
// setup overlay image
inputImage = [UIImage imageNamed:@"eye.png"];
sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage smoothlyScaleOutput:YES];
// determine the necessary scaling to keep image at actual size
CGFloat tx = inputImage.size.width / 480.0; // 480/640: based on video camera preset
CGFloat ty = inputImage.size.height / 640.0;
// apply transform to filter
CGAffineTransform t = CGAffineTransformMakeScale(tx, ty);
[(GPUImageTransformFilter *)transformFilter setAffineTransform:t];
//
[sourcePicture addTarget:filter];
[sourcePicture addTarget:transformFilter];
[sourcePicture processImage];
[filter addTarget:filterView];
[videoCamera startCameraCapture];
I am trying to implement brightness, contrast and exposure filters in a single view, much like the iPhoto app. I tried to set up a group filter to do this, but it shows a white screen instead of the modified picture. Here is the code I applied:
GPUImageFilterGroup *groupFilter = [[GPUImageFilterGroup alloc]init];
GPUImageBrightnessFilter *brightnessFilter = [[GPUImageBrightnessFilter alloc]init];
[brightnessFilter setBrightness:brightnessValue];
GPUImageContrastFilter *contrastFilter = [[GPUImageContrastFilter alloc]init];
[contrastFilter setContrast:contrastValue];
GPUImageExposureFilter *exposureFilter =[[GPUImageExposureFilter alloc]init];
[exposureFilter setExposure:exposureValue];
[groupFilter addFilter:brightnessFilter];
[groupFilter addFilter:contrastFilter];
[groupFilter addFilter:exposureFilter];
GPUImagePicture *stillImage= [[GPUImagePicture alloc]initWithImage:self.imageToModify];
[stillImage addTarget:groupFilter];
[stillImage processImage];
previewPicture.image = [groupFilter imageFromCurrentlyProcessedOutputWithOrientation:self.imageToModify.imageOrientation];
I even tried applying each filter individually, but it still shows a white image. Is the above code correct?
I have also tried using GPUImageFilterPipeline instead of GPUImageFilterGroup, but I still have the same issue.
For the record, the image is a still image, not a live feed.
You have missed some code statements: the filters inside a group must be chained together, and the group must be given its initial and terminal filters, as below.
[brightnessFilter addTarget: contrastFilter];
[contrastFilter addTarget: exposureFilter];
[(GPUImageFilterGroup *) groupFilter setInitialFilters:[NSArray arrayWithObject: brightnessFilter]];
[(GPUImageFilterGroup *) groupFilter setTerminalFilter:exposureFilter];
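Putting it together, the whole setup would look something like this (a sketch based on your code above):
GPUImageFilterGroup *groupFilter = [[GPUImageFilterGroup alloc] init];

GPUImageBrightnessFilter *brightnessFilter = [[GPUImageBrightnessFilter alloc] init];
[brightnessFilter setBrightness:brightnessValue];
GPUImageContrastFilter *contrastFilter = [[GPUImageContrastFilter alloc] init];
[contrastFilter setContrast:contrastValue];
GPUImageExposureFilter *exposureFilter = [[GPUImageExposureFilter alloc] init];
[exposureFilter setExposure:exposureValue];

[groupFilter addFilter:brightnessFilter];
[groupFilter addFilter:contrastFilter];
[groupFilter addFilter:exposureFilter];

// Chain the filters inside the group and declare the group's entry and exit points
[brightnessFilter addTarget:contrastFilter];
[contrastFilter addTarget:exposureFilter];
[groupFilter setInitialFilters:[NSArray arrayWithObject:brightnessFilter]];
[groupFilter setTerminalFilter:exposureFilter];

GPUImagePicture *stillImage = [[GPUImagePicture alloc] initWithImage:self.imageToModify];
[stillImage addTarget:groupFilter];
[stillImage processImage];
previewPicture.image = [groupFilter imageFromCurrentlyProcessedOutputWithOrientation:self.imageToModify.imageOrientation];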
Thanks
Here is full code to apply brightness, contrast and saturation to a video.
I got the reference code from this link and made some changes to it.
This code uses the GPUImage framework.
1) In the .h file:
#import "GPUImage.h"
@interface ViewController : UIViewController
{
GPUImageMovie *movieFile;
GPUImageMovieWriter *movieWriter;
GPUImageUIElement *uiElementInput;
}
2) In the .m file:
- (void)editVideo
{
NSURL *sampleURL = [[NSBundle mainBundle] URLForResource:@"Sample Video" withExtension:@"m4v"];
movieFile = [[GPUImageMovie alloc] initWithURL:sampleURL];
movieFile.runBenchmark = YES;
movieFile.playAtActualSpeed = YES;
GPUImageFilterGroup *groupFilter = [[GPUImageFilterGroup alloc]init]; //1
GPUImageBrightnessFilter *brightnessFilter = [[GPUImageBrightnessFilter alloc] init]; //2
GPUImageContrastFilter *contrastFilter = [[GPUImageContrastFilter alloc] init]; //3
GPUImageSaturationFilter *saturationFilter = [[GPUImageSaturationFilter alloc] init]; //4
[(GPUImageBrightnessFilter*)brightnessFilter setBrightness:0.10]; // change value between -1.00 to 1.00
[(GPUImageContrastFilter*)contrastFilter setContrast:1.48]; // change value between 0.00 to 4.00
[(GPUImageSaturationFilter*)saturationFilter setSaturation:2.00]; //change value between 0.00 to 2.00
GPUImageAlphaBlendFilter *blendFilter = [[GPUImageAlphaBlendFilter alloc] init]; //5
blendFilter.mix = 0.0;
/* ************************************************** */
UIView *contentView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, [[UIScreen mainScreen] bounds].size.width, [[UIScreen mainScreen] bounds].size.height-20)];
contentView.backgroundColor = [UIColor clearColor];
uiElementInput = [[GPUImageUIElement alloc] initWithView:contentView];
[brightnessFilter addTarget:contrastFilter];
[contrastFilter addTarget:saturationFilter];
[saturationFilter addTarget:blendFilter];
[uiElementInput addTarget:blendFilter];
[(GPUImageFilterGroup *) groupFilter setInitialFilters:[NSArray arrayWithObject: brightnessFilter]];
[(GPUImageFilterGroup *) groupFilter setTerminalFilter:blendFilter];
[movieFile addTarget:brightnessFilter];
[movieFile addTarget:contrastFilter];
[movieFile addTarget:saturationFilter];
// Only rotate the video for display, leave orientation the same for recording
GPUImageView *filterView = (GPUImageView *)vwVideo;
[brightnessFilter addTarget:filterView];
[contrastFilter addTarget:filterView];
[saturationFilter addTarget:filterView];
[blendFilter addTarget:filterView];
[brightnessFilter setFrameProcessingCompletionBlock:^(GPUImageOutput * filter, CMTime frameTime){
if (frameTime.value/frameTime.timescale == 2) {
[contentView viewWithTag:1].hidden = NO;
}
[uiElementInput update];
}];
[contrastFilter setFrameProcessingCompletionBlock:^(GPUImageOutput * filter, CMTime frameTime){
if (frameTime.value/frameTime.timescale == 2) {
[contentView viewWithTag:1].hidden = NO;
}
[uiElementInput update];
}];
[saturationFilter setFrameProcessingCompletionBlock:^(GPUImageOutput * filter, CMTime frameTime){
if (frameTime.value/frameTime.timescale == 2) {
[contentView viewWithTag:1].hidden = NO;
}
[uiElementInput update];
}];
//In addition to displaying to the screen, write out a processed version of the movie to disk
NSString *pathToMovie = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/Movie.m4v"];
unlink([pathToMovie UTF8String]); // If a file already exists, AVAssetWriter won't let you record new frames, so delete the old movie
movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:[NSURL fileURLWithPath:pathToMovie] size:CGSizeMake(640.0, 480.0)];
[brightnessFilter addTarget:movieWriter];
[contrastFilter addTarget:movieWriter];
[blendFilter addTarget:movieWriter];
// Configure this for video from the movie file, where we want to preserve all video frames and audio samples
movieWriter.shouldPassthroughAudio = YES;
movieFile.audioEncodingTarget = movieWriter;
[movieFile enableSynchronizedEncodingUsingMovieWriter:movieWriter];
[movieWriter startRecording];
[movieFile startProcessing];
[movieWriter setCompletionBlock:^{
UISaveVideoAtPathToSavedPhotosAlbum(pathToMovie, nil, nil, nil);
}];
}
Note: please run this demo example on a device, not in the simulator, for correct results.
After processing is done you will find the processed video on your device; for a different effect you can change the values passed to setBrightness, setContrast and setSaturation (see code).