I use the AVFoundation framework to display video from the camera.
The code I use is the usual:
session = [[AVCaptureSession alloc] init];
...
captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
...
[cameraView.layer addSublayer:captureVideoPreviewLayer];
...
So I want to add a zoom function to the camera.
I have found two ways to implement zoom.
First: use a CGAffineTransform:
cameraView.transform = CGAffineTransformMakeScale(x, y);
Second: put cameraView inside a scroll view, set the minimum and maximum zoom scales, and return this view as the zooming view:
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
{
return cameraView;
}
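A minimal sketch of the scroll-view setup for the second approach (assuming a scrollView outlet that wraps cameraView, and that this controller is its delegate):
scrollView.delegate = self;
scrollView.minimumZoomScale = 1.0;
scrollView.maximumZoomScale = 4.0; // arbitrary upper limit for this sketch
scrollView.showsHorizontalScrollIndicator = NO;
scrollView.showsVerticalScrollIndicator = NO;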
Which approach gives better zoom performance and quality? Are there any other ways to implement zoom? Maybe I missed some AVFoundation methods for zooming.
Thank you.
Well, there is actually a CGFloat property on AVCaptureConnection called videoScaleAndCropFactor.
You can find the documentation here.
I'm not sure if this applies only to the still image output, but I've been working with it and it does well if you hook it up to a gesture or a slider and let that control the factor.
You can find a demo of it here.
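A rough sketch of that idea, assuming a stillImageOutput property on the session and the preview layer from the question (the slider action name is illustrative):
- (IBAction)zoomSliderChanged:(UISlider *)slider
{
    AVCaptureConnection *connection = [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
    // Clamp to the connection's supported range.
    CGFloat factor = MAX(1.0, MIN(slider.value, connection.videoMaxScaleAndCropFactor));
    connection.videoScaleAndCropFactor = factor;
    // Scale the preview so what the user sees matches the captured crop.
    [captureVideoPreviewLayer setAffineTransform:CGAffineTransformMakeScale(factor, factor)];
}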
Good stuff. I'm trying to loop it so I can create a barcode scanner with a zoom. What I'm doing is rough though, haha.
I am working on a barcode-scanning iOS app that uses AVFoundation.
I have created a square box with constraints using Interface Builder. The square box is all good with the constraints, perfectly fine.
I have the following code to add the AVCaptureVideoPreviewLayer to the square box:
self.captureLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession];
[self.captureLayer setFrame:self.cameraPreviewView.layer.bounds];
[self.captureLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
[self.cameraPreviewView.layer addSublayer:self.captureLayer];
The layer follows the leading-space constraint of the square box, but not the trailing one. The newly added AV layer goes off the screen (to the right) while the square box itself is fine. What am I missing here?
Thanks!
I think you should try setting self.captureLayer's bounds/position instead of its frame.
Cheers!
This might be happening if you are setting the frame in viewDidLoad. If so, try doing it in viewWillAppear: instead.
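A related sketch: updating the frame once Auto Layout has finished, as an alternative to viewWillAppear: (assuming the captureLayer and cameraPreviewView properties from the question):
- (void)viewDidLayoutSubviews
{
    [super viewDidLayoutSubviews];
    // The constrained view has its final size here, so the layer can track it.
    self.captureLayer.frame = self.cameraPreviewView.layer.bounds;
}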
This may solve your problem
CGRect bounds = view.layer.bounds;
captureLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
captureLayer.bounds = bounds;
captureLayer.position = CGPointMake(CGRectGetMidX(bounds), CGRectGetMidY(bounds));
Or:
Since you are using AVLayerVideoGravityResizeAspectFill, the layer will extend beyond the view; you can use AVLayerVideoGravityResizeAspectFit instead.
If you stick with AVLayerVideoGravityResizeAspectFill, you need to clip the view before adding the sublayer:
view.clipsToBounds = YES;
or, equivalently, on the layer:
view.layer.masksToBounds = YES;
I have this AVFoundation camera app of mine. The camera preview is the result of a filter applied in the didOutputSampleBuffer method.
When I set up the camera I follow what Apple did in one of their sample code projects (CIFunHouse):
// setting up the camera
CGRect bounds = [self.containerOpenGL bounds];

_eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
_videoPreviewView = [[GLKView alloc] initWithFrame:bounds
                                           context:_eaglContext];
[self.containerOpenGL addSubview:_videoPreviewView];
[self.containerOpenGL sendSubviewToBack:_videoPreviewView];

id<MTLDevice> device = MTLCreateSystemDefaultDevice();

NSDictionary *options = @{kCIContextUseSoftwareRenderer : @(NO),
                          kCIContextPriorityRequestLow : @(YES),
                          kCIContextWorkingColorSpace : [NSNull null]};
_ciContext = [CIContext contextWithEAGLContext:_eaglContext options:options];

[_videoPreviewView bindDrawable];

_videoPreviewViewBounds = CGRectZero;
_videoPreviewViewBounds.size.width = _videoPreviewView.drawableWidth;
_videoPreviewViewBounds.size.height = _videoPreviewView.drawableHeight;

dispatch_async(dispatch_get_main_queue(), ^(void) {
    CGAffineTransform transform = CGAffineTransformMakeRotation(M_PI);
    _videoPreviewView.transform = transform;
    _videoPreviewView.frame = bounds;
});
self.containerOpenGL is a full-screen view constrained to the four corners of the screen. Autorotation is on.
But this is the problem...
When I set up the GLKView and self.ciContext, they are created assuming the device is in a particular orientation. If the device is in that orientation when I run the application, the preview view fills the entire self.containerOpenGL area, but when I rotate the device the preview view ends up off center.
I see that Apple's code works perfectly, and they don't use any constraints. They don't use any autorotation method, no didLayoutSubviews, and when you rotate the device while running their code, everything rotates except the preview view. Worse than that, my preview view appears to rotate but theirs does not.
Is this black magic? How do they do that?
They add their preview view to a UIWindow, and that is why it does not rotate. I hope this answers the question. If not, I will continue to look through their source code.
Quote from source code.
we make our video preview view a subview of the window, and send it to the back; this makes FHViewController's view (and its UI elements) on top of the video preview, and also makes video preview unaffected by device rotation
They also add this
_videoPreviewView.enableSetNeedsDisplay = NO;
This may keep it from responding as well
Edit: It appears that now the preview rotates and the UI does as well. To combat this, you can add a second window, send it to the back, make the main window clear, and add the previewView to the second window with a dummy view controller that refuses to autorotate by overriding the appropriate method. This lets the preview stay fixed while the UI rotates.
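A rough sketch of that second-window setup (assumptions: a strong previewWindow property to keep the window alive, NonRotatingViewController is a hypothetical dummy controller, and _videoPreviewView is the GLKView from the question):
// Dummy controller that refuses to rotate, so the preview window stays put.
@interface NonRotatingViewController : UIViewController
@end
@implementation NonRotatingViewController
- (BOOL)shouldAutorotate { return NO; }
@end

// Inside the main view controller, once its view is in a window:
- (void)installPreviewWindow
{
    self.previewWindow = [[UIWindow alloc] initWithFrame:[UIScreen mainScreen].bounds];
    self.previewWindow.rootViewController = [[NonRotatingViewController alloc] init];
    self.previewWindow.windowLevel = UIWindowLevelNormal - 1; // keep it behind the main window
    [self.previewWindow.rootViewController.view addSubview:_videoPreviewView];
    self.previewWindow.hidden = NO;
    // The main window needs a clear background so the preview shows through.
    self.view.window.backgroundColor = [UIColor clearColor];
}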
I am trying to capture video with PBJVision.
I set up the camera like this:
vision.cameraMode = PBJCameraModeVideo;
vision.cameraOrientation = PBJCameraOrientationPortrait;
vision.outputFormat = PBJOutputFormatWidescreen;
And this produces output of 1280x720, where 1280 is the width.
Setting the orientation to landscape ROTATES the stream.
I have been trying to record video with GPUImage, where I can do this:
videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset1280x720
cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
_movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:_movieURL size:CGSizeMake(720.0, 1280.0)];
so that I get vertical output.
I would like to achieve vertical output with PBJVision, because I experience problems with GPUImage writing video to disk (I will open another question for that).
What method/property of AVFoundation is responsible for giving vertical output instead of horizontal?
Sorry for the question; I have been googling for 2 days and can't find the answer.
I was having the same issue. I changed the output format to vision.outputFormat = PBJOutputFormatPreset; and now I'm getting that portrait/vertical output.
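For reference, the AVFoundation-level knob is usually the capture connection's videoOrientation; a sketch, assuming an AVCaptureVideoDataOutput named videoOutput on the session:
AVCaptureConnection *connection = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
if ([connection isVideoOrientationSupported]) {
    // Buffers are delivered rotated, so the output becomes 720x1280 instead of 1280x720.
    connection.videoOrientation = AVCaptureVideoOrientationPortrait;
}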
I am building an iOS application based on a ViewController, entirely in code. I am doing it programmatically because I want to understand the underlying concepts.
I have created a subclass of UIImageView and initialized it with an image. In the initialization method I also added a second UIImageView, because I would like to handle the two differently while keeping them part of the same object. Ultimately I would like to be able to scale the object (and hence the two images) according to the device screen resolution (e.g. if the resolution is low, I will scale the two images by 50%). I want to do this because I would like to implement a zoom in/zoom out feature as well as support multiple resolutions and screen layouts.
Additional information:
The two images have different sizes: 500x500 pixels and 350x350 pixels.
My questions are:
how do I position the second image exactly in the center of the first? (I used the center property of the main UIImageView, but I think I got it wrong. I thought center was the exact center of the square, but either I am using it incorrectly or there is something I am missing.)
are there any negative side effects to this approach (a UIView subclass containing an additional UIView)? (E.g. is it going to create confusion when applying transforms? Does it reduce rendering speed? Or, more simply, is it a bad design pattern?)
I find it difficult to understand the positioning of the second image. See the code snippet below; this is what I use:
CGRect innerButtonFrame = CGRectMake(self.center.x/2, self.center.y/2,innerButtonSelectedImage.size.width,innerButtonSelectedImage.size.height);
Taken from:
-(id) initWithImage:(UIImage *)image
{
    if(self = [super initWithImage:image]){
        //
        self.userInteractionEnabled = true;

        // Initialize gesture recognizers
        UITapGestureRecognizer *tapInView = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(tapInImageView:)];
        [self addGestureRecognizer:tapInView];

        UILongPressGestureRecognizer *longPress = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(longPressInView:)];
        [self addGestureRecognizer:longPress];

        // Initialize labels
        ..

        // Inner circle image
        innerButtonView = [[UIImageView alloc] init];
        innerButtonSelectedImage = [UIImage imageNamed:@"inner circle.png"];
        CGRect innerButtonFrame = CGRectMake(self.center.x/2, self.center.y/2, innerButtonSelectedImage.size.width, innerButtonSelectedImage.size.height);
        innerButtonView.frame = innerButtonFrame;
        [innerButtonView setImage:innerButtonSelectedImage];

        // Add additional ui components to view
        [self addSubview:innerButtonView];
        ..
        [self addSubview:descriptionLabel];
    }
    return self;
}
EDIT: This is how it looks if I change the positioning code to the following:
CGRect innerButtonFrame = CGRectMake(0, 0,innerButtonSelectedImage.size.width,innerButtonSelectedImage.size.height);
innerButtonView.frame = innerButtonFrame;
I also don't understand why the image is bigger than the screen, as the blue one should be 500x500 pixels and the iPhone 6 screen is 750x1334 pixels.
How about:
CGRect innerButtonFrame = CGRectMake(0, 0, innerButtonSelectedImage.size.width, innerButtonSelectedImage.size.height);
innerButtonView.frame = innerButtonFrame;
// Center the subview within the parent's own coordinate space (its bounds),
// not self.center, which is expressed in the superview's coordinates.
innerButtonView.center = CGPointMake(CGRectGetMidX(self.bounds), CGRectGetMidY(self.bounds));
If you need the 500x500 circle, use an image half that size: replace the 500x500 asset with 250x250, and the small 350x350 circle with 175x175. View frames are measured in points, and the iPhone 6 screen is only 375x667 points (750x1334 pixels at @2x), so a 500-point-wide view will not fit on screen.
I hope this solves your problem. Enjoy!
Thanks.
I have managed (with the help of this post) to open up a PLStaticWallpaperImageViewController from the PhotoLibrary private framework, which allows the direct setting of the wallpaper and lock screen (using same UI as the Photos app). Unfortunately, the image cropping/zooming features don't seem to work, as touches to the image view itself don't seem to be coming through (the main view is also not dismissed properly after the cancel/set buttons are touched, but this isn't so important).
I have an Xcode project demonstrating the wallpaper setting (can be run in simulator as well as a non-jailbroken device):
https://github.com/newenglander/WallpaperTest/
The code is quite basic, and involves a ViewController inheriting from PLStaticWallpaperImageViewController and implementing an init method similar to the following:
- (id)initWithCoder:(NSCoder *)aDecoder {
    self = [self initWithUIImage:[UIImage imageWithContentsOfFile:@"/System/Library/WidgetResources/ibutton/white_i@2x.png"]];
    self.allowsEditing = YES;
    self.saveWallpaperData = YES;
    return self;
}
(It will be necessary to allow access to the photo library after the first launch, and for some reason the popup for this comes up behind the app, rather than on top.)
Perhaps someone has insight as to why the cropping/zooming isn't working, or can give me an alternative way to set the wallpaper in an app (destined for Cydia rather than the App Store of course)?
Use this sample project; it works very well.
It has built-in camera controls and a custom layout, and it crops the image after it is taken or after choosing one from your library. I used it for my project and it is very simple to customize.
https://github.com/yuvirajsinh/YCameraView
//---------- Answer improved ----------//
I took a look at your project and I see 2 problems:
Here you have 3 semantic-issue warnings:
- (id)initWithUIImage:(id)arg1 cropRect:(struct CGRect { struct CGPoint { float x_1_1_1; float x_1_1_2; } x1; struct CGSize { float x_2_1_1; float x_2_1_2; } x2; })arg2;
And in your ViewController.m, where are you getting the image from?
- (id)initWithCoder:(NSCoder *)aDecoder
{
    // black_i
    // what directory is this?
    self = [self initWithUIImage:[UIImage imageWithContentsOfFile:@"/System/Library/WidgetResources/ibutton/white_i@2x.png"]];
    //--------------------
    self.allowsEditing = YES;
    self.saveWallpaperData = YES;
    return self;
}
I tried removing your
- (id)initWithUIImage:(id)arg1 cropRect:(struct CGRect { struct CGPoint { float x_1_1_1; float x_1_1_2; } x1; struct CGSize { float x_2_1_1; float x_2_1_2; } x2; })arg2;
and changing the image to:
self = [self initWithUIImage:[UIImage imageNamed:@"myImage.png"]];
and everything works, but the image can't be cropped. With my GitHub project YCameraView you first have to understand how its CROPPING function works. If you want cropping, a simpler option is to present a full-screen camera picker that lets the user take a photo or choose one from the library with editing allowed, and then load the new picture into your view like this:
self = [self initWithUIImage:imageSelected.image];
As for dismissing the view: you can't, because this is a full app that lets the user set the background wallpaper, and you can't terminate the app to get back to SpringBoard. You have to build it as first view > picker > detail view with settings for Home and Lock Screen > then dismiss and come back to the first view.
PS: I think that to enable editing directly in a view in your project, you should improve your code with pinch and pan gestures on the UIView.
Hope this helps!
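A hypothetical sketch of that pinch/pan idea (imageView and the handler names are illustrative; the recognizers would be created in the view controller that owns the view, e.g. in viewDidLoad):
UIPinchGestureRecognizer *pinch = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handlePinch:)];
UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePan:)];
[imageView addGestureRecognizer:pinch];
[imageView addGestureRecognizer:pan];
imageView.userInteractionEnabled = YES;

- (void)handlePinch:(UIPinchGestureRecognizer *)gesture
{
    // Scale incrementally and reset, so the scale does not compound between callbacks.
    gesture.view.transform = CGAffineTransformScale(gesture.view.transform, gesture.scale, gesture.scale);
    gesture.scale = 1.0;
}

- (void)handlePan:(UIPanGestureRecognizer *)gesture
{
    // Move the view by the pan translation, then reset the translation.
    CGPoint translation = [gesture translationInView:gesture.view.superview];
    gesture.view.center = CGPointMake(gesture.view.center.x + translation.x,
                                      gesture.view.center.y + translation.y);
    [gesture setTranslation:CGPointZero inView:gesture.view.superview];
}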