Video capture vertical output on iOS

I am trying to capture video with PBJVision.
I set up the camera like this:
vision.cameraMode = PBJCameraModeVideo;
vision.cameraOrientation = PBJCameraOrientationPortrait;
vision.outputFormat = PBJOutputFormatWidescreen;
This produces 1280x720 output, where 1280 is the width.
Setting the orientation to Landscape ROTATES the stream.
I have also been recording video with GPUImage, and there I can do:
videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset1280x720
cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
_movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:_movieURL size:CGSizeMake(720.0, 1280.0)];
That way I get vertical output.
I would like to achieve vertical output with PBJVision, because I'm having problems with GPUImage writing video to disk (I will ask a separate question for that).
What AVFoundation method/property is responsible for producing vertical output instead of horizontal?
Sorry for the question; I have been googling for two days and can't find the answer.

I was having the same issue. I changed the output format to vision.outputFormat = PBJOutputFormatPreset; and now I get portrait/vertical output.
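For reference, the AVFoundation-level property behind this is videoOrientation on the AVCaptureConnection feeding the output. A minimal sketch outside of PBJVision (assuming movieOutput is an AVCaptureMovieFileOutput already attached to a running session):
AVCaptureConnection *connection = [movieOutput connectionWithMediaType:AVMediaTypeVideo];
// Portrait orientation makes the written movie 720x1280 instead of 1280x720.
if ([connection isVideoOrientationSupported]) {
    connection.videoOrientation = AVCaptureVideoOrientationPortrait;
}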

Related

RTCVideoTrack shows stretched video in WebRTC

I am using the core WebRTC framework and rendering my local stream full-screen on an iPhone. Unfortunately, the video appears stretched and doesn't look like the video view in the Camera app.
I tried adding an aspect ratio to RTCMediaConstraints and also used the adaptOutputFormatToWidth method to fix the output.
NSDictionary *mandatoryConstraints;
/* want to calculate aspect ratio dynamically */
NSString *aspectRatio = [NSString stringWithFormat:@"%f", (double)4 / 3];
if (aspectRatio) {
    mandatoryConstraints = @{ kRTCMediaConstraintsMaxAspectRatio : aspectRatio };
}
RTCMediaConstraints *cameraConstraints =
    [[RTCMediaConstraints alloc] initWithMandatoryConstraints:mandatoryConstraints
                                          optionalConstraints:nil];
RTCAVFoundationVideoSource *localVideoSource =
    [peerFactory avFoundationVideoSourceWithConstraints:cameraConstraints];
[localVideoSource adaptOutputFormatToWidth:devicewidth height:devicewidth fps:30];
The link below shows the difference between the Camera app's video view and my app's call video view:
https://drive.google.com/file/d/1HN3KQcJphtC3VzJjlI4Hm-D3u2E6qmdQ/view?usp=sharing
I believe you are rendering your video in an RTCEAGLVideoView, which requires size adjustment. You can use RTCMTLVideoView in place of RTCEAGLVideoView.
If you want to keep using RTCEAGLVideoView, use the RTCEAGLVideoViewDelegate method:
- (void)videoView:(RTCEAGLVideoView *)videoView didChangeVideoSize:(CGSize)size;
This method will give you the correct size of the video.
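For example, a rough sketch of that callback (it assumes the RTCEAGLVideoView sits inside self.view and that AVFoundation is imported for AVMakeRectWithAspectRatioInsideRect):
- (void)videoView:(RTCEAGLVideoView *)videoView didChangeVideoSize:(CGSize)size
{
    // Fit the reported video size into the container without stretching.
    CGRect containerBounds = self.view.bounds; // assumption: the container fills the screen
    videoView.frame = AVMakeRectWithAspectRatioInsideRect(size, containerBounds);
}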
(For Swift) -> Use RTCMTLVideoView and set videoContentMode
#if arch(arm64)
let renderer = RTCMTLVideoView(frame: videoView.frame)
renderer.videoContentMode = .scaleAspectFill
#else
let renderer = RTCEAGLVideoView(frame: videoView.frame)
#endif

How can I add Core Image filters to OpenTok video?

When I publish a stream on iOS, TokBox uses the default camera. Is there a way to add live filters to the publisher?
I just want some simple sample code showing how to create a filter and attach it to the OpenTok publisher object (OTVideoCapture).
Or, if that's not the right way to do it, attaching the filter on the subscriber side works too.
How can this be done easily?
As I understand it, you want to apply filters before sending the video data, and in real time. There is no ready-made source code for this, but I can outline the path.
For real-time video filters you can use the GPUImage framework. It has a ready-to-use camera class, GPUImageVideoCamera. You then need to create a class that implements GPUImageInput (a target, in GPUImage terms), which produces an OTVideoFrame from its input, and add it to the pipeline.
Something like this:
videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
videoCamera.horizontallyMirrorFrontFacingCamera = NO;
videoCamera.horizontallyMirrorRearFacingCamera = NO;
// filter
filter = [[GPUImageSepiaFilter alloc] init];
[videoCamera addTarget:filter];
// frame producer for OTVideoCapture
frameProducer = [[FrameProducer alloc] init];
[filter addTarget:frameProducer];
// camera view to show what we record
[filter addTarget:filterView];
You also need a custom implementation of the OTVideoCapture protocol for OpenTok itself. You can use TBExampleVideoCapture from the Lets-Build-OTPublisher sample as a starting point, replacing its camera code with the GPUImageVideoCamera code above to apply filters in real time.
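As a rough sketch of the hand-off (not the original sample's code): one way to avoid implementing the whole GPUImageInput protocol by hand is to base FrameProducer on GPUImageRawDataOutput, which already conforms to GPUImageInput and exposes the filtered BGRA bytes. The OpenTok side is only outlined; consumeBGRABytes:bytesPerRow: is a hypothetical helper on an assumed videoCapture property (your custom OTVideoCapture implementation) that wraps the bytes in an OTVideoFrame and calls consumeFrame: on the capture consumer.
@interface FrameProducer : GPUImageRawDataOutput
@end

// Size matches the 640x480 session preset used above.
frameProducer = [[FrameProducer alloc] initWithImageSize:CGSizeMake(640.0, 480.0)
                                     resultsInBGRAFormat:YES];
__weak FrameProducer *weakProducer = frameProducer;
__weak typeof(self) weakSelf = self;
[frameProducer setNewFrameAvailableBlock:^{
    // On newer GPUImage versions, wrap these reads in
    // lockFramebufferForReading / unlockFramebufferAfterReading.
    GLubyte *bytes = [weakProducer rawBytesForImage];
    NSUInteger bytesPerRow = [weakProducer bytesPerRowInOutput];
    // Hypothetical helper: builds an OTVideoFrame from the BGRA bytes and
    // forwards it to the OTVideoCaptureConsumer.
    [weakSelf.videoCapture consumeBGRABytes:bytes bytesPerRow:bytesPerRow];
}];
// Attach it to the filter, as in the pipeline above.
[filter addTarget:frameProducer];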

GPUImage initWithAsset only shows last frame

I'm trying to initialize GPUImageMovie with an AVAsset, which is an AVMutableComposition. However, it will only show the last frame after it has run all benchmark frames. If I use initWithURL it works fine, but I don't want to generate a file.
I've also tried creating an AVAsset from a file URL with the same result.
_movieFile = [[GPUImageMovie alloc] initWithAsset:_myAsset];
_movieFile.runBenchmark = YES;
_movieFile.playAtActualSpeed = YES;
_filterView = [[GPUImageView alloc] initWithFrame:[self windowSize]];
Any ideas why this doesn't work?
Solved it. What I did was create a GPUImageMovieWriter but not actually call [movieWriter startRecording].
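For reference, a minimal sketch of that workaround as I read it (names are illustrative; it assumes the writer is also added as a target even though recording is never started):
_movieFile = [[GPUImageMovie alloc] initWithAsset:_myAsset];
_movieFile.playAtActualSpeed = YES;

_filterView = [[GPUImageView alloc] initWithFrame:[self windowSize]];
[_movieFile addTarget:_filterView];

// Dummy writer: created and attached, but startRecording is intentionally
// never called. dummyURL is a placeholder for any writable file URL.
_movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:dummyURL
                                                        size:CGSizeMake(720.0, 1280.0)];
[_movieFile addTarget:_movieWriter];

[_movieFile startProcessing];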

GPUImage Harris Corner Detection on an existing UIImage gives a black screen output

I've successfully added a crosshair generator and harris corner detection filter onto a GPUImageStillCamera output, as well as on live video from GPUImageVideoCamera.
I'm now trying to get this working on a photo set on a UIImageView, but I continually get a black screen as the output. I have been reading the issues listed on GitHub against Brad Larson's GPUImage project, but they seemed to relate more to blend-type filters, and after following the suggestions there I still face the same problem.
I've tried altering every line of code to follow various examples I have seen, and to follow Brad's example code in the Filter demo projects, but the result is always the same.
My current code, once I've taken a photo (which I check at this point to make sure it is not just a black photo), is:
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:self.photoView.image];
GPUImageHarrisCornerDetectionFilter *cornerFilter1 = [[GPUImageHarrisCornerDetectionFilter alloc] init];
[cornerFilter1 setThreshold:0.1f];
[cornerFilter1 forceProcessingAtSize:self.photoView.frame.size];
GPUImageCrosshairGenerator *crossGen = [[GPUImageCrosshairGenerator alloc] init];
crossGen.crosshairWidth = 15.0;
[crossGen forceProcessingAtSize:self.photoView.frame.size];
[cornerFilter1 setCornersDetectedBlock:^(GLfloat* cornerArray, NSUInteger cornersDetected, CMTime frameTime, BOOL endUpdating)
{
[crossGen renderCrosshairsFromArray:cornerArray count:cornersDetected frameTime:frameTime];
}];
[stillImageSource addTarget:crossGen];
[crossGen addTarget:cornerFilter1];
[crossGen prepareForImageCapture];
[stillImageSource processImage];
UIImage *currentFilteredImage = [crossGen imageFromCurrentlyProcessedOutput];
UIImageWriteToSavedPhotosAlbum(currentFilteredImage, nil, nil, nil);
[self.photoView setImage:currentFilteredImage];
I've tried prepareForImageCapture on both filters, on neither, adding the two targets in the opposite order, calling imageFromCurrentlyProcessedOutput on either filter, I've tried it without the crosshair generator, I've tried using local variables and variables declared in the .h file. I've tried with and without forceProcessingAtSize on each of the filters.
I can't think of anything else that I haven't tried to get the output. The app is running on iOS 7.0, built with Xcode 5.0.1. The standard filters work on the photo, e.g. the simple GPUImageSobelEdgeDetectionFilter included in the SimpleImageFilter test app.
Any suggestions? I am saving the output to the camera roll so I can check it's not just me failing to display it correctly. I suspect it's a stupid mistake somewhere but am at a loss as to what else to try now.
Thanks.
Edited to add: the corner detection is definitely working, as depending on the threshold I set, it returns between 6 and 511 corners.
The problem with the above is that you're not chaining filters in the proper order. The Harris corner detector takes in an input image, finds the corners within it, and provides the callback block to return those corners. The GPUImageCrosshairGenerator takes in those points and creates a visual representation of the corners.
What you have in the above code is image -> GPUImageCrosshairGenerator -> GPUImageHarrisCornerDetectionFilter, which won't really do anything.
The code in your answer does go directly from the image to the GPUImageHarrisCornerDetectionFilter, but you don't want to use the image output from that. As you saw, it produces an image where the corners are identified by white dots on a black background. Instead, use the callback block, which processes that and returns an array of normalized corner coordinates for you to use.
If you need these to be visible, you could then take that array of coordinates and feed it into the GPUImageCrosshairGenerator to create visible crosshairs, but that image will need to be blended with your original image to make any sense. This is what I do in the FilterShowcase example.
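A rough sketch of that chain, using the same older still-image API as the question (GPUImageAlphaBlendFilter is one way to do the blend; ordering can be finicky with still images, so treat this as an outline rather than tested code):
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:self.photoView.image];
GPUImageHarrisCornerDetectionFilter *cornerFilter = [[GPUImageHarrisCornerDetectionFilter alloc] init];
[cornerFilter setThreshold:0.1f];
GPUImageCrosshairGenerator *crossGen = [[GPUImageCrosshairGenerator alloc] init];
crossGen.crosshairWidth = 15.0;
[crossGen forceProcessingAtSize:self.photoView.image.size];
[cornerFilter setCornersDetectedBlock:^(GLfloat *cornerArray, NSUInteger cornersDetected, CMTime frameTime, BOOL endUpdating)
{
    [crossGen renderCrosshairsFromArray:cornerArray count:cornersDetected frameTime:frameTime];
}];
// image -> corner detector (used only for its callback)
[stillImageSource addTarget:cornerFilter];
// blend the original image (input 0) with the generated crosshairs (input 1)
GPUImageAlphaBlendFilter *blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
blendFilter.mix = 1.0;
[stillImageSource addTarget:blendFilter];
[crossGen addTarget:blendFilter];
[blendFilter prepareForImageCapture];
[stillImageSource processImage];
UIImage *result = [blendFilter imageFromCurrentlyProcessedOutput];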
I appear to have fixed the problem by trying different variations again; now the returned image is black, but there are white dots at the locations of the found corners. I removed the GPUImageCrosshairGenerator altogether. The code that got this working was:
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:self.photoView.image];
GPUImageHarrisCornerDetectionFilter *cornerFilter1 = [[GPUImageHarrisCornerDetectionFilter alloc] init];
[cornerFilter1 setThreshold:0.1f];
[cornerFilter1 forceProcessingAtSize:self.photoView.frame.size];
[stillImageSource addTarget:cornerFilter1];
[cornerFilter1 prepareForImageCapture];
[stillImageSource processImage];
UIImage *currentFilteredImage = [cornerFilter1 imageFromCurrentlyProcessedOutput];
UIImageWriteToSavedPhotosAlbum(currentFilteredImage, nil, nil, nil);
[self.photoView setImage:currentFilteredImage];
I do not need to add the crosshairs for the purposes of my app - I simply want to parse the locations of the corners to do some cropping, but I needed the dots to be visible to check the corners were being detected correctly. I'm not sure if white dots on black are the expected output of this filter, but I presume so.
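For the cropping use case, a small sketch (not from the original post) of reading the corner positions directly from the callback instead of rendering dots; the array holds normalized x,y pairs in the 0..1 range, so multiply by the image size to get pixel coordinates:
CGSize imageSize = self.photoView.image.size;
[cornerFilter1 setCornersDetectedBlock:^(GLfloat *cornerArray, NSUInteger cornersDetected, CMTime frameTime, BOOL endUpdating)
{
    NSMutableArray *points = [NSMutableArray arrayWithCapacity:cornersDetected];
    for (NSUInteger i = 0; i < cornersDetected; i++)
    {
        CGPoint p = CGPointMake(cornerArray[i * 2] * imageSize.width,
                                cornerArray[i * 2 + 1] * imageSize.height);
        [points addObject:[NSValue valueWithCGPoint:p]];
    }
    // points can now drive the cropping logic instead of (or alongside) the visible dots.
}];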
Updated code for Swift 2:
let stillImageSource: GPUImagePicture = GPUImagePicture(image: image)
let cornerFilter1: GPUImageHarrisCornerDetectionFilter = GPUImageHarrisCornerDetectionFilter()
cornerFilter1.threshold = 0.1
cornerFilter1.forceProcessingAtSize(image.size)
stillImageSource.addTarget(cornerFilter1)
stillImageSource.processImage()
let tmp: UIImage = cornerFilter1.imageByFilteringImage(image)

AVFoundation camera zoom

I use the AVFoundation framework to display video from the camera.
The code I use is the usual:
session = [[AVCaptureSession alloc] init] ;
...
captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
...
[cameraView.layer addSublayer:captureVideoPreviewLayer];
...
I want to add a zoom function to the camera.
I have found two solutions for implementing zoom.
First: use CGAffineTransform:
cameraView.transform = CGAffineTransformMakeScale(x,y);
Second: put cameraView in a scroll view, set up the maximum and minimum zoom scales, and return this view as the zooming view.
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
{
return cameraView;
}
Which way of zooming gives better performance and quality? Are there any other solutions for zooming? Maybe I missed some AVFoundation methods for zooming.
Thank you.
Well, there is actually a CGFloat property on AVCaptureConnection called videoScaleAndCropFactor.
You can find the documentation here.
I'm not sure if this is just for still image output, but I've been working with it and it does well if you drive it from a gesture or a slider and let that control the float.
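For example, a rough sketch driving it from a pinch gesture (assumes stillImageOutput is the session's AVCaptureStillImageOutput and captureVideoPreviewLayer is the preview layer from the question; the scale/crop factor affects the captured image, so the preview is scaled separately to match):
- (void)handlePinch:(UIPinchGestureRecognizer *)pinch
{
    AVCaptureConnection *connection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
    CGFloat maxScale = connection.videoMaxScaleAndCropFactor;
    CGFloat scale = MAX(1.0, MIN(connection.videoScaleAndCropFactor * pinch.scale, maxScale));
    connection.videoScaleAndCropFactor = scale;

    // Scale the preview to match, since the factor only applies to the captured output.
    [captureVideoPreviewLayer setAffineTransform:CGAffineTransformMakeScale(scale, scale)];
    pinch.scale = 1.0;
}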
You can find a demo of it here.
Good stuff. I'm trying to loop it so I can create a barcode scanner with a zoom. What I'm doing is rough, though.