Is there any way to force ReplayKit to record only part of the screen, in square mode? The current API seems to record the whole screen.
ReplayKit records everything on the screen except system prompts and dialogs.
You can, however, overlay another UIWindow on top of your main one and apply a mask to an otherwise empty UIView to hide parts of the screen and force a square recording.
The aspect ratio of the final recording will still match the screen, though.
_overlayWindow = [[UIWindow alloc] initWithFrame:UIScreen.mainScreen.bounds]; // Full-sized window
_overlayWindow.backgroundColor = [UIColor clearColor];
_overlayWindow.userInteractionEnabled = NO;
_overlayWindow.hidden = NO;
UIView *maskedView = [[UIView alloc] initWithFrame:_overlayWindow.bounds];
// Create a mask layer
CAShapeLayer *maskLayer = [[CAShapeLayer alloc] init];
CGRect maskRect = CGRectMake(0, 0, 200, 200);
// Create a path with the square in it
CGPathRef path = CGPathCreateWithRect(maskRect, NULL);
maskLayer.path = path;
CGPathRelease(path); // CF objects are not managed by ARC
// Set the mask of the view
maskedView.layer.mask = maskLayer;
[_overlayWindow addSubview:maskedView];
At the moment the ReplayKit framework does not provide any customisation of the recording's screen size, so you have to record the whole screen of the gameplay.
Users need to be able to take a photo of their ID. I need to add a blue frame to the camera view as a guide. The guide should have the same aspect ratio on all device sizes and fit a label with instructions. Can I accomplish this using UIImagePicker?
Here is some incomplete code. Thanks for any help.
UIImageView *overlayImage = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@""]];
CGRect overlayRect = CGRectMake(0, 0, self.view.frame.size.height - 16, self.view.frame.size.width - 16);
[overlayImage setFrame:overlayRect];
[self.imagePicker setCameraOverlayView:overlayImage];
Use AVCaptureDevice, AVCaptureSession, AVCaptureVideoPreviewLayer and AVCapturePhotoOutput.
Set an AVCaptureDeviceInput as the input of the capture session, and the photo output as its output. Initialize an AVCaptureVideoPreviewLayer with the AVCaptureSession and add it to your view.layer through addSublayer. You can use a UIViewController from a storyboard or a programmatically instantiated controller; add an overlay image, or a cornered view using view.layer.borderWidth or a UIBezierPath. Set the controller up as the AVCapturePhotoCaptureDelegate and add the delegate methods. Use the capturePhoto(with:delegate:) method. Enjoy.
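A minimal sketch of the pipeline described above, assuming iOS 10+ and a `previewView` outlet of your own (the guide's exact frame/aspect-ratio math is left out):

```objc
// Assumes AVFoundation is imported and the view controller
// conforms to AVCapturePhotoCaptureDelegate.
AVCaptureSession *session = [[AVCaptureSession alloc] init];

AVCaptureDevice *camera =
    [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input =
    [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
if (input) { [session addInput:input]; }

AVCapturePhotoOutput *photoOutput = [[AVCapturePhotoOutput alloc] init];
[session addOutput:photoOutput];

// The preview layer sits underneath your overlay views.
AVCaptureVideoPreviewLayer *preview =
    [AVCaptureVideoPreviewLayer layerWithSession:session];
preview.frame = self.previewView.bounds;
[self.previewView.layer addSublayer:preview];

// Draw the blue guide on top, e.g. a bordered UIView
// whose frame you compute to keep a fixed aspect ratio.
UIView *guide = [[UIView alloc] init];
guide.layer.borderColor = [UIColor blueColor].CGColor;
guide.layer.borderWidth = 2.0;
[self.previewView addSubview:guide];

[session startRunning];
// Later, to take the picture:
// [photoOutput capturePhotoWithSettings:[AVCapturePhotoSettings photoSettings]
//                              delegate:self];
```

The captured photo then arrives in the delegate callback, where you can crop it to the guide's rect if needed.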
I want to add a metric between two point sets on a face, to use for object detection in digital images; we restrict it to two dimensions, as shown below.
I could recognize the face features, as shown in the image below, using:
-(void)markFaces:(UIImageView *)facePicture
{
// draw a CI image with the previously loaded face detection picture
CIImage* image = [CIImage imageWithCGImage:facePicture.image.CGImage];
// create a face detector - since speed is not an issue we'll use a high accuracy
// detector
CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
context:nil options: [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];
// create an array containing all the detected faces from the detector
NSArray* features = [detector featuresInImage:image];
// we'll iterate through every detected face. CIFaceFeature provides us
// with the width of the entire face, and the coordinates of each eye
// and the mouth if detected. BOOLs are also provided for the eyes and
// mouth so we can check whether they were detected.
for(CIFaceFeature* faceFeature in features)
{
// get the width of the face
CGFloat faceWidth = faceFeature.bounds.size.width;
// create a UIView using the bounds of the face
UIView* faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];
// add a border around the newly created UIView
faceView.layer.borderWidth = 1;
faceView.layer.borderColor = [[UIColor redColor] CGColor];
// add the new view to create a box around the face
[self.view addSubview:faceView];
if(faceFeature.hasLeftEyePosition)
{
// create a UIView with a size based on the width of the face
UIView* leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.leftEyePosition.x-faceWidth*0.15, faceFeature.leftEyePosition.y-faceWidth*0.15, faceWidth*0.3, faceWidth*0.3)];
// change the background color of the eye view
[leftEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];
// set the position of the leftEyeView based on the face
[leftEyeView setCenter:faceFeature.leftEyePosition];
// round the corners
leftEyeView.layer.cornerRadius = faceWidth*0.15;
// add the view to the window
[self.view addSubview:leftEyeView];
}
if(faceFeature.hasRightEyePosition)
{
// create a UIView with a size based on the width of the face
UIView* rightEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.rightEyePosition.x-faceWidth*0.15, faceFeature.rightEyePosition.y-faceWidth*0.15, faceWidth*0.3, faceWidth*0.3)];
// change the background color of the eye view
[rightEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];
// set the position of the rightEyeView based on the face
[rightEyeView setCenter:faceFeature.rightEyePosition];
// round the corners
rightEyeView.layer.cornerRadius = faceWidth*0.15;
// add the new view to the window
[self.view addSubview:rightEyeView];
}
if(faceFeature.hasMouthPosition)
{
// create a UIView with a size based on the width of the face
UIView* mouth = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.mouthPosition.x-faceWidth*0.2, faceFeature.mouthPosition.y-faceWidth*0.2, faceWidth*0.4, faceWidth*0.4)];
// change the background color for the mouth to green
[mouth setBackgroundColor:[[UIColor greenColor] colorWithAlphaComponent:0.3]];
// set the position of the mouthView based on the face
[mouth setCenter:faceFeature.mouthPosition];
// round the corners
mouth.layer.cornerRadius = faceWidth*0.2;
// add the new view to the window
[self.view addSubview:mouth];
}
}
}
-(void)faceDetector
{
// Load the picture for face detection
//UIImageView* image = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"facedetectionpic.jpg"]];
UIImageView* image = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"timthumb.png"]];
// Draw the face detection image
[self.view addSubview:image];
// Execute the method used to markFaces in background
[self performSelectorInBackground:@selector(markFaces:) withObject:image];
// flip image on y-axis to match coordinate system used by core image
[image setTransform:CGAffineTransformMakeScale(1, -1)];
// flip the entire window to make everything right side up
[self.view setTransform:CGAffineTransformMakeScale(1, -1)];
}
Now I want to add points locating the reference positions of the eyes, nose, etc. before uploading to the database. Later, these images can be compared to existing images based on the locations of these metric points, as shown below.
I have referred to this link but could not implement it. If anyone knows how, please suggest an approach.
Thank you
I am afraid this is not straightforward. Looking at the documentation, CIDetector does not include detectors for additional facial landmarks. You will need to train your own on a set of manually annotated images. There are a couple of open-source projects around for that; a very good one (accurate and fast) is dlib: http://blog.dlib.net/2014/08/real-time-face-pose-estimation.html
I want to create a black UIView with transparent circles.
I thought about creating one view (black, 50% transparency) and adding multiple circles inside it, but I don't know how to set the transparency for each one. I know how to create a circle view (example: how to draw a custom UIView that is just a circle in an iPhone app).
What I want to do is something like the iShowcase library, but with multiple dots:
Any clue? Thanks.
SOLVED
I took a look at the code of the iShowcase library and solved my problem. I am now working on a library based on iShowcase.
I will post it here when I finish.
Please have a look at the link below; I hope it will be helpful for you.
Link: Here is an answer on setting a shadow on your view.
Use alpha for your circleView, as in your linked example, then add them as subviews of yourmainview:
UIView *circleView = [[UIView alloc] initWithFrame:CGRectMake(10,20,100,100)];
circleView.alpha = 0.5;
circleView.layer.cornerRadius = 50;
circleView.backgroundColor = [UIColor whiteColor];
[yourmainview addSubview: circleView];
By the way, in your picture I think the white circles have 100% alpha. You can use an individual alpha for each circleView, or use a randomizer :)
As for the updated example, why don't you add more buttons and showcases in your .h file, synthesize them, and use multiple instances: [showcase setupShowcaseForTarget:btn_custom_1 title:@"title" details:@"other"];? I think you should modify the main classes, because what you want is a different containerView for each set of views (circles).
Using a modified iShowcase.m [- (void) calculateRegion] and different views as containers, I was able to make something like: http://tinypic.com/view.php?pic=2iwao6&s=8#.VLPTRqYsRE8 So the answer is: use custom views for multiple showcases (e.g. [showcase2 setContainerView:self.view2];), then a custom frame for each showcase (showcase2.frame = CGRectMake(0,0,100,100);). I don't have time to fine-tune the example, but yes, you can achieve the desired result.
I finally solved my question: inspired by the iShowcase library, I wrote this simple class and uploaded it to GitHub.
https://github.com/tato469/FVEasyShowCase
The simplest thing you can do is have your main view (black, 50% transparent) and add shapes to its mask layer.
So basically:
//Set up your main view.
UIView* mainView = [UIView new];
mainView.backgroundColor = [UIColor blackColor];
mainView.alpha = 0.5;
//Put the circle shapes into a container view.
UIView* circle1 = [YourCircleClassHere new];
UIView* circle2 = [YourCircleClassHere new];
UIView* circle3 = [YourCircleClassHere new];
UIView* container = [[UIView alloc] initWithFrame:mainView.bounds];
[container addSubview:circle1];
[container addSubview:circle2];
[container addSubview:circle3];
//Assign the container's layer as the mask of the main view.
//The main view then only shows where the mask layer is opaque,
//similar in spirit to clipsToBounds, but driven by the mask's alpha.
mainView.layer.mask = container.layer;
On a side note:
I achieved this with images ("contents = (id)[UIImage CGImage]"), but I'm fairly sure it should work with UIViews as well.
Also, mind possible mistakes, since I wrote this from memory and didn't test it. Keep me updated on whether it works ^_^
I'm trying to create a "grip" for a vertical bar in my app, to show the user that they can swipe on the view. I'd like to just use a small version of the image, which has the grip plus 1 px above and below it that serves as the background for the rest of the bar. So:
ssss ssss
gggg becomes ssss
ssss gggg
ssss
ssss
The method resizableImageWithCapInsets: allows you to tell iOS not to adjust the outside edge, but is there a way to tell it not to adjust the interior and to stretch the exterior instead?
I've tried specifying an oversized cap inset, UIEdgeInsetsMake(29.0f, 0.0f, 29.0f, 0.0f), and that resulted in the grip being at the top of the image's frame, but it is otherwise the stretching behavior I'm looking for. I just need the grip part centered in the frame.
Full code for those interested (this results in the grip being repeated across the whole length):
gripBar = [[UIImageView alloc] initWithImage:[[UIImage imageNamed:@"grip"] resizableImageWithCapInsets:UIEdgeInsetsMake(1.0f, 0.0f, 1.0f, 0.0f)]];
gripBar.frame = CGRectMake(0, 0, gripWidth, fullHeight);
[view addSubview:gripBar];
Update: working code based on chosen answer
@implementation GripBar
- (id)init {
    self = [super init];
    if (self) {
        UIImageView *bar = [[UIImageView alloc] initWithImage:[[UIImage imageNamed:@"gripback"] resizableImageWithCapInsets:UIEdgeInsetsMake(0.0f, 0.0f, 1.0f, 0.0f)]];
        bar.autoresizingMask = UIViewAutoresizingFlexibleHeight;
        [self addSubview:bar];
        UIImageView *grip = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"grip"]];
        grip.contentMode = UIViewContentModeCenter;
        grip.autoresizingMask = UIViewAutoresizingFlexibleTopMargin | UIViewAutoresizingFlexibleBottomMargin;
        [self addSubview:grip];
    }
    return self;
}
@end
The easiest way to do this is probably to make your grip part a (small-height, but full-width) subview of a larger bar view whose background stretches or tiles. You can keep the grip centered in the larger view with, for example, autoresizing masks.
I am trying to implement my own map engine by using CATiledLayer + UIScrollView.
In the drawLayer:inContext: method of my implementation, if I have the tile image needed for the current bounding box, I immediately draw it in the context.
However, when one is not available in the local cache, the tile image is asynchronously requested/downloaded from a tile server, and nothing is drawn in the context.
The problem is that when I don't draw anything in the context, that part of the view is shown as a blank tile. The expected behavior is to show the scaled-up tile from the previous zoom level.
If you have faced a similar problem and found a solution, please let me know.
You have to call setNeedsDisplayInRect: for the tile as soon as you have the data. You have to live with the tile being blank until then, because you have no way to affect which tiles CATiledLayer creates.
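A hedged sketch of that flow, where `self.cache`, `keyForRect:` and `fetchTileForRect:completion:` are hypothetical helpers standing in for your own cache and downloader:

```objc
// In drawLayer:inContext: - draw only if the tile is already cached.
// Otherwise kick off an async download; this pass draws nothing.
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
    CGRect tileRect = CGContextGetClipBoundingBox(ctx);
    UIImage *tile = [self.cache objectForKey:[self keyForRect:tileRect]];
    if (tile) {
        UIGraphicsPushContext(ctx);
        [tile drawInRect:tileRect];
        UIGraphicsPopContext();
        return;
    }
    [self fetchTileForRect:tileRect completion:^(UIImage *downloaded) {
        [self.cache setObject:downloaded forKey:[self keyForRect:tileRect]];
        dispatch_async(dispatch_get_main_queue(), ^{
            // Ask the layer to redraw just this tile now that we have it.
            [layer setNeedsDisplayInRect:tileRect];
        });
    }];
}
```

The key point is that only the rect for the newly arrived tile is invalidated, so CATiledLayer redraws just that tile rather than the whole layer.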
I do the same, blocking the thread until the tile has been downloaded. The performance is good; it runs smoothly. I'm using a queue to store every tile request, so I can also cancel tile requests that are no longer useful.
To do so, use a lock to stop the thread just after you launch your async tile request, and unlock it as soon as your tile is cached.
Does that sound good? It worked for me!
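One way to sketch the lock described above is with a GCD semaphore; `fetchTileForRect:completion:` is a hypothetical async downloader, not part of any framework:

```objc
// Block the CATiledLayer drawing thread until the tile arrives.
// This is tolerable here because CATiledLayer draws on background
// threads, so the main thread stays responsive.
- (UIImage *)waitForTileInRect:(CGRect)tileRect {
    __block UIImage *result = nil;
    dispatch_semaphore_t sema = dispatch_semaphore_create(0);
    [self fetchTileForRect:tileRect completion:^(UIImage *tile) {
        result = tile;
        dispatch_semaphore_signal(sema); // unlock once the tile is cached
    }];
    // Lock right after launching the async request.
    dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER);
    return result;
}
```

In production you would likely add a timeout instead of DISPATCH_TIME_FOREVER, and drop requests for tiles that have scrolled off screen.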
Your CATiledLayer should be providing this tile from the previous zoom level, as you expect. What are levelsOfDetail and levelsOfDetailBias set to on your tiled layer?
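For reference, those properties are configured on the tiled layer itself; the values here are illustrative, and `tiledView` is assumed to be a UIView whose layerClass returns CATiledLayer:

```objc
CATiledLayer *tiledLayer = (CATiledLayer *)self.tiledView.layer;
tiledLayer.levelsOfDetail = 4;     // how many downsampled levels the layer caches
tiledLayer.levelsOfDetailBias = 2; // extra magnified levels available when zooming in
```

With levelsOfDetail greater than 1, the layer can show a scaled copy of a coarser level while the sharper tiles are still being rendered.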
You have to call setNeedsDisplay or setNeedsDisplayInRect:, but the problem is that calling either one redraws every tile in the scroll view. So try using a subclass of UIView instead of a CATiledLayer subclass, and implement the TiledView (UIView subclass) like this:
+ (Class) layerClass {
return [CATiledLayer class];
}
-(void)drawRect:(CGRect)r {
CGRect tile = r;
int x = tile.origin.x/TILESIZE;
int y = tile.origin.y/TILESIZE;
NSString *tileName = [NSString stringWithFormat:@"Shed_1000_%i_%i", x, y];
NSString *path =
[[NSBundle mainBundle] pathForResource:tileName ofType:@"png"];
UIImage *image = [UIImage imageWithContentsOfFile:path];
[image drawAtPoint:tile.origin];
// uncomment the following to see the tile boundaries
/*
UIBezierPath* bp = [UIBezierPath bezierPathWithRect: r];
[[UIColor whiteColor] setStroke];
[bp stroke];
*/
}
and
for scrollView,
UIScrollView* sv = [[UIScrollView alloc] initWithFrame:
[[UIScreen mainScreen] applicationFrame]];
sv.backgroundColor = [UIColor whiteColor];
self.view = sv;
CGRect f = CGRectMake(0,0,3*TILESIZE,3*TILESIZE);
TiledView* content = [[TiledView alloc] initWithFrame:f];
float tsz = TILESIZE * content.layer.contentsScale;
[(CATiledLayer*)content.layer setTileSize: CGSizeMake(tsz, tsz)];
[self.view addSubview:content];
[sv setContentSize: f.size];