iOS Floating Video Window like the YouTube App

Does anyone know of any existing library, or any techniques, to get the same effect as is found in the YouTube app?
The video can be "minimised" so that it hovers at the bottom of the screen, where it can be swiped to close or tapped to re-maximise.
See:
Video Playing Normally: https://www.dropbox.com/s/o8c1ntfkkp4pc4q/2014-06-07%2001.19.20.png
Video Minimized: https://www.dropbox.com/s/w0syp3infu21g08/2014-06-07%2001.19.27.png
(Notice how the video is now in a small floating window on the bottom right of the screen).
Anyone have any idea how this was achieved, and if there are any existing tutorials or libraries that can be used to get this same effect?

It sounded fun, so I looked at YouTube. The video looks like it plays in a 16:9 box at the top, with a "see also" list below. When the user minimizes the video, the player drops to the lower right corner along with the "see also" view. At the same time, that "see also" view fades to transparent.
1) Set up the views like that and create outlets. Here's what it looks like in IB. (Note that the two containers are siblings.)
2) Give the video view swipe-up and swipe-down gesture recognizers:
@interface ViewController ()
@property (weak, nonatomic) IBOutlet UIView *tallMpContainer;
@property (weak, nonatomic) IBOutlet UIView *mpContainer;
@end
@implementation ViewController
- (void)viewDidLoad
{
[super viewDidLoad];
UISwipeGestureRecognizer *swipeDown = [[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(swipeDown:)];
UISwipeGestureRecognizer *swipeUp = [[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(swipeUp:)];
swipeUp.direction = UISwipeGestureRecognizerDirectionUp;
swipeDown.direction = UISwipeGestureRecognizerDirectionDown;
[self.mpContainer addGestureRecognizer:swipeUp];
[self.mpContainer addGestureRecognizer:swipeDown];
}
- (void)swipeDown:(UIGestureRecognizer *)gr {
[self minimizeMp:YES animated:YES];
}
- (void)swipeUp:(UIGestureRecognizer *)gr {
[self minimizeMp:NO animated:YES];
}
3) Then add a method to report the current state, and one to change it.
- (BOOL)mpIsMinimized {
return self.tallMpContainer.frame.origin.y > 0;
}
- (void)minimizeMp:(BOOL)minimized animated:(BOOL)animated {
if ([self mpIsMinimized] == minimized) return;
CGRect tallContainerFrame, containerFrame;
CGFloat tallContainerAlpha;
if (minimized) {
CGFloat mpWidth = 160;
CGFloat mpHeight = 90; // 160:90 == 16:9
CGFloat x = 320-mpWidth;
CGFloat y = self.view.bounds.size.height - mpHeight;
tallContainerFrame = CGRectMake(x, y, 320, self.view.bounds.size.height);
containerFrame = CGRectMake(x, y, mpWidth, mpHeight);
tallContainerAlpha = 0.0;
} else {
tallContainerFrame = self.view.bounds;
containerFrame = CGRectMake(0, 0, 320, 180);
tallContainerAlpha = 1.0;
}
NSTimeInterval duration = (animated)? 0.5 : 0.0;
[UIView animateWithDuration:duration animations:^{
self.tallMpContainer.frame = tallContainerFrame;
self.mpContainer.frame = containerFrame;
self.tallMpContainer.alpha = tallContainerAlpha;
}];
}
I didn't add video to this project, but it should just drop in. Make the mpContainer the parent view of the MPMoviePlayerController's view and it should look pretty cool.
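For example, in Swift this drop-in could look roughly like the following (a sketch, not part of the original answer; videoURL and the mpContainer outlet are assumed to exist, and the player needs to be kept alive by a strong property in real code):
import MediaPlayer

let player = MPMoviePlayerController(contentURL: videoURL)
player.view.frame = mpContainer.bounds
// Let the player's view follow mpContainer as it animates between the full and minimized frames
player.view.autoresizingMask = [.flexibleWidth, .flexibleHeight]
mpContainer.addSubview(player.view)
player.prepareToPlay()
player.play()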

Use TFSwipeShrink and customize the code for your project.
Hope it helps.

Update: the new framework FWDraggableSwipePlayer can be used for dragging a UIView like in the YouTube app.
Hope it helps.

This is a Swift 3 version of the answer @danh provided earlier.
https://stackoverflow.com/a/24107949/1211470
import UIKit
class ViewController: UIViewController {
@IBOutlet weak var tallMpContainer: UIView!
@IBOutlet weak var mpContainer: UIView!
var swipeDown: UISwipeGestureRecognizer?
var swipeUp: UISwipeGestureRecognizer?
override func viewDidLoad() {
super.viewDidLoad()
swipeDown = UISwipeGestureRecognizer(target: self, action: #selector(swipeDownAction))
swipeUp = UISwipeGestureRecognizer(target: self, action: #selector(swipeUpAction))
swipeDown?.direction = .down
swipeUp?.direction = .up
self.mpContainer.addGestureRecognizer(swipeDown!)
self.mpContainer.addGestureRecognizer(swipeUp!)
}
@objc func swipeDownAction() {
minimizeWindow(minimized: true, animated: true)
}
@objc func swipeUpAction() {
minimizeWindow(minimized: false, animated: true)
}
func isMinimized() -> Bool {
return self.tallMpContainer.frame.origin.y > 20
}
func minimizeWindow(minimized: Bool, animated: Bool) {
if isMinimized() == minimized {
return
}
var tallContainerFrame: CGRect
var containerFrame: CGRect
var tallContainerAlpha: CGFloat
if minimized == true {
let mpWidth: CGFloat = 160
let mpHeight: CGFloat = 90
let x: CGFloat = 320-mpWidth
let y: CGFloat = self.view.bounds.size.height - mpHeight;
tallContainerFrame = CGRect(x: x, y: y, width: 320, height: self.view.bounds.size.height)
containerFrame = CGRect(x: x, y: y, width: mpWidth, height: mpHeight)
tallContainerAlpha = 0.0
} else {
tallContainerFrame = self.view.bounds
containerFrame = CGRect(x: 0, y: 0, width: 320, height: 180)
tallContainerAlpha = 1.0
}
let duration: TimeInterval = (animated) ? 0.5 : 0.0
UIView.animate(withDuration: duration, animations: {
self.tallMpContainer.frame = tallContainerFrame
self.mpContainer.frame = containerFrame
self.tallMpContainer.alpha = tallContainerAlpha
})
}
}

Related

How to change volume programmatically on iOS 11.4

Before, I was setting sound volume programmatically using this approach:
MPVolumeView *volumeView = [[MPVolumeView alloc] init];
UISlider *volumeViewSlider = nil;
for (UIView *view in [volumeView subviews])
{
if ([view.class.description isEqualToString:@"MPVolumeSlider"])
{
volumeViewSlider = (UISlider *)view;
break;
}
}
[volumeViewSlider setValue:0.5 animated:YES];
[volumeViewSlider sendActionsForControlEvents:UIControlEventTouchUpInside];
It was working well up to iOS 11.3, but on iOS 11.4 it doesn't. The volume value remains unchanged. Can someone help with this issue? Thanks.
Changing volumeViewSlider.value after a small delay resolves the problem.
- (IBAction)increase:(id)sender {
MPVolumeView *volumeView = [[MPVolumeView alloc] init];
UISlider *volumeViewSlider = nil;
for (UIView *view in volumeView.subviews) {
if ([view isKindOfClass:[UISlider class]]) {
volumeViewSlider = (UISlider *)view;
break;
}
}
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.01 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
volumeViewSlider.value = 0.5f;
});
}
Swift version
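A rough Swift sketch of the same delayed workaround (the function name and delay are mine, mirroring the Objective-C above):
import MediaPlayer

func setVolumeAfterDelay(_ value: Float) {
    let volumeView = MPVolumeView()
    // MPVolumeView's slider is a private subview; locating it this way mirrors the Objective-C answer above
    let slider = volumeView.subviews.compactMap { $0 as? UISlider }.first
    DispatchQueue.main.asyncAfter(deadline: .now() + 0.01) {
        slider?.value = value
    }
}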
I solved it by adding a new MPVolumeView to my UIViewController's view; otherwise it didn't set the volume anymore. Since I added it to the controller, I also need to position the volume view outside the screen to hide it from the user.
I prefer not to use delayed volume setting, as it makes things more complicated, especially if you need to play a sound immediately after setting the volume.
The code is in Swift 4:
let volumeControl = MPVolumeView(frame: CGRect(x: 0, y: 0, width: 120, height: 120))
override func viewDidLoad() {
super.viewDidLoad()
self.view.addSubview(volumeControl);
}
override func viewDidLayoutSubviews() {
volumeControl.frame = CGRect(x: -120, y: -120, width: 100, height: 100);
}
func setVolume(_ volume: Float) {
let lst = volumeControl.subviews.filter{NSStringFromClass($0.classForCoder) == "MPVolumeSlider"}
let slider = lst.first as? UISlider
slider?.setValue(volume, animated: false)
}
I just added the MPVolumeView as a subview to another view (that was never drawn on screen).
This had to be done prior to any attempt to set or get the volume.
private let containerView = UIView()
private let volumeView = MPVolumeView()
func prepareWorkaround() {
self.containerView.addSubview(self.volumeView)
}
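With that in place, setting the volume can follow the same pattern as the other answers. A sketch (setSystemVolume is my name, not from the original post):
func setSystemVolume(_ value: Float) {
    // The slider inside MPVolumeView is an implementation detail, but every answer here relies on it
    guard let slider = self.volumeView.subviews.compactMap({ $0 as? UISlider }).first else { return }
    slider.value = value
}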
I had to have an MPVolumeView as a subview of a view in the hierarchy for the HUD not to show up on iOS 12. It needs to be slightly visible:
let volume = MPVolumeView(frame: .zero)
volume.setVolumeThumbImage(UIImage(), for: UIControl.State())
volume.isUserInteractionEnabled = false
volume.alpha = 0.0001
volume.showsRouteButton = false
view.addSubview(volume)
When setting the volume I get the slider from MPVolumeView as with previous posters and set the value:
func setVolumeLevel(_ volumeLevel: Float) {
guard let slider = volume.subviews.compactMap({ $0 as? UISlider }).first else {
return
}
slider.value = volumeLevel
}

Run a loop until UIPanGestureRecognizer ends

Hopefully this isn't too vague for the mods.
I want to make a user interaction similar to the volume controls on some hi-fis, where you move a dial left or right to change the volume: rather than turning the dial complete revolutions, you turn it left or right slightly, and the more you turn it the faster the volume changes, until you let go and it springs back to the middle.
In my app I want to use a UIPanGestureRecognizer where, as the user pans up from the middle, the volume goes up, and the further from the middle the faster the increase. When they pan below the mid point of the screen the volume goes down, again faster the further from the middle you are.
The area I'm stuck on is how to make this happen without locking up the UI. I can't just use the gesture recognizer's action selector, as this is only called when there is movement; for this interaction to work, the user will often keep their finger in a single location while waiting for the right volume to be reached.
I feel like I want to set a loop running outside the gesture recognizer selector and have it monitor a class variable that gets updated when the gesture moves or ends. If I do this in the gesture recognizer selector it will just keep running...
If this were an embedded system I would just set up some kind of interrupt-based polling to check where the input control was and keep adding to the volume until it was back to the middle; I can't find the comparable mechanism for iOS here.
Suggestions would be welcome. Sorry mods if this is too vague; it's more of a framework methodology question than a specific code issue.
Interesting question. I wrote a sample for you which should be what you want.
Objective-C code:
#import "ViewController.h"
@interface ViewController ()
@property float theValue;
@property NSTimer *timer;
@property bool needRecord;
@property UIView *dot;
@end
@implementation ViewController
- (void)viewDidLoad {
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
self.needRecord = NO;
self.theValue = 0;
UIView *circle = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 50, 50)];
circle.layer.borderWidth = 3;
circle.layer.borderColor = [UIColor redColor].CGColor;
circle.layer.cornerRadius = 25;
circle.center = self.view.center;
[self.view addSubview:circle];
self.dot = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 40, 40)];
self.dot.backgroundColor = [UIColor redColor];
self.dot.layer.cornerRadius = 20;
self.dot.center = self.view.center;
[self.view addSubview:self.dot];
UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(panHandle:)];
[self.dot addGestureRecognizer:pan];
self.timer = [NSTimer scheduledTimerWithTimeInterval:0.1 target:self selector:@selector(timerFire) userInfo:nil repeats:YES];
}
-(void)panHandle:(UIPanGestureRecognizer *)pan{
CGPoint pt = [pan translationInView:self.view];
// NSLog([NSString stringWithFormat:@"pt.y = %f",pt.y]);
switch (pan.state) {
case UIGestureRecognizerStateBegan:
[self draggingStart];
break;
case UIGestureRecognizerStateChanged:
self.dot.center = CGPointMake(self.view.center.x, self.view.center.y + pt.y);
break;
case UIGestureRecognizerStateEnded:
[self draggingEnded];
break;
default:
break;
}
}
-(void)draggingStart{
self.needRecord = YES;
}
-(void)draggingEnded{
self.needRecord = NO;
[UIView animateWithDuration:0.5 animations:^{
self.dot.center = self.view.center;
}];
}
-(void)timerFire{
if (self.needRecord) {
float distance = self.dot.center.y - self.view.center.y;
// NSLog([NSString stringWithFormat:@"distance = %f",distance]);
self.theValue -= distance/1000;
NSLog(@"theValue = %f", self.theValue);
}
}
@end
I'm learning Swift right now, so if you need it, this is the Swift code:
class ViewController: UIViewController {
var lbInfo:UILabel?;
var theValue:Float?;
var timer:NSTimer?;
var needRecord:Bool?;
var circle:UIView?;
var dot:UIView?;
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view, typically from a nib.
needRecord = false;
theValue = 0;
lbInfo = UILabel(frame: CGRect(x: 50, y: 50, width: UIScreen.mainScreen().bounds.width-100, height: 30));
lbInfo!.textAlignment = NSTextAlignment.Center;
lbInfo!.text = "Look at here!";
self.view.addSubview(lbInfo!);
circle = UIView(frame: CGRect(x: 0, y: 0, width: 50, height: 50));
circle!.layer.borderWidth = 3;
circle!.layer.borderColor = UIColor.redColor().CGColor;
circle!.layer.cornerRadius = 25;
circle!.center = self.view.center;
self.view.addSubview(circle!);
dot = UIView(frame: CGRect(x: 0, y: 0, width: 40, height: 40));
dot!.backgroundColor = UIColor.redColor();
dot!.layer.cornerRadius = 20;
dot!.center = self.view.center;
self.view.addSubview(dot!);
let pan = UIPanGestureRecognizer(target: self, action: #selector(ViewController.panhandler));
dot!.addGestureRecognizer(pan);
timer = NSTimer.scheduledTimerWithTimeInterval(0.1, target: self, selector: #selector(ViewController.timerFire), userInfo: nil, repeats: true)
}
func panhandler(pan: UIPanGestureRecognizer) -> Void {
let pt = pan.translationInView(self.view);
switch pan.state {
case UIGestureRecognizerState.Began:
draggingStart();
case UIGestureRecognizerState.Changed:
self.dot!.center = CGPoint(x: self.view.center.x, y: self.view.center.y + pt.y);
case UIGestureRecognizerState.Ended:
draggingEnded();
default:
break;
}
}
func draggingStart() -> Void {
needRecord = true;
}
func draggingEnded() -> Void {
needRecord = false;
UIView.animateWithDuration(0.1, animations: {
self.dot!.center = self.view.center;
});
}
@objc func timerFire() -> Void {
if(needRecord!){
let distance:Float = Float(self.dot!.center.y) - Float(self.view.center.y);
theValue! -= distance/1000;
self.lbInfo!.text = String(format: "%.2f", theValue!);
}
}
}
Hope it can help you.
If you still need some advice, just leave a comment here and I will check it later.
This sounds like a fun challenge. I don't totally understand the hi-fi thing, but what I'm picturing is a circular knob with a small dot at 9 o'clock, almost at the edge. When you turn the knob to the right the dot moves towards 12 o'clock and the volume increases, accelerating faster the further the dot is from 9 o'clock. It continues to increase as long as the dot is above 9; just the acceleration of the increase changes.
When you turn left, the dot goes towards 6 o'clock, the volume decreases, and the acceleration depends on the radial distance from 9 o'clock. If this assumption is correct, I think the following would work.
I would solve this with a little trigonometry. To get the acceleration, you need the angle from the 9 o'clock axis (the negative x-axis). A positive angle is increasing the volume, a negative angle is decreasing the volume, and the acceleration depends on the degree of rotation. This angle will also give you a transform that you can apply to the view to change the dot's place. I don't think this will take anything too fancy code-wise. In this case, I have made the maximum rotation 90 degrees, or pi/2 radians. If you can actually turn the knob more than that, it would take some code changes.
var volume: Double = 0
var maxVolume: Double = 100
var increasing : Bool
var multiplier: Double = 0
var cartesianTransform = CGAffineTransform()
let knobViewFrame = CGRect(x: 50, y: 50, width: 100, height: 100)
let knobRadius: CGFloat = 45
let knob = UIView()
var timer : NSTimer?
func setTransform() {
self.cartesianTransform = CGAffineTransform(a: 1/knobRadius, b: 0, c: 0, d: -1/knobRadius, tx: knobViewFrame.width/2, ty: knobViewFrame.height * 1.5)
// admittedly, I always have to play with these things to get them right, so there may be some errors. This transform should turn the view into a plane with (0,0) at the center, and the knob's circle at (-1,0)
}
func panKnob(pan: UIPanGestureRecognizer) {
let pointInCartesian = CGPointApplyAffineTransform(pan.locationInView(pan.view!), cartesianTransform)
if pan.state == .Began {
increasing = pointInCartesian.y > 0
timer = NSTimer.scheduledTimerWithTimeInterval(0.1, target: self, selector: #selector(increaseVolume), userInfo: nil, repeats: true)
}
let arctangent = CGFloat(M_PI) - atan2(pointInCartesian.y, pointInCartesian.x)
let maxAngle = increasing ? CGFloat(M_PI)/2 : -CGFloat(M_PI)/2
var angle: CGFloat
if increasing {
angle = arctangent > maxAngle ? maxAngle : arctangent
} else {
angle = arctangent < maxAngle ? maxAngle : arctangent
}
knob.transform = CGAffineTransformMakeRotation(angle)
self.multiplier = Double(angle) * 10
if pan.state == .Ended || pan.state == .Cancelled || pan.state == .Failed {
timer?.invalidate()
UIView.animateWithDuration(0.75, delay: 0, options: .CurveEaseIn, animations: {self.knob.transform = CGAffineTransformIdentity}, completion: nil)
}
}
func increaseVolume() {
let newVolume = volume + multiplier
volume = newVolume > maxVolume ? maxVolume : (newVolume < 0 ? 0 : newVolume)
if volume == maxVolume || volume == 0 || multiplier == 0 {
timer?.invalidate()
}
}
I haven't tested the above, but this seemed like a cool puzzle. The multiplier only changes when the angle changes, and the volume keeps adding the multiplier while the timer is valid. If you want the volume to accelerate continuously without angle changing, move the multiplier change to the timer's selector, and keep the angle as a class variable so you know how fast to accelerate it.
Edit: You could probably do it without the transform, by just getting the delta between the dot and the locationInView.
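A rough sketch of that variant, in the same Swift 2 style as the code above: drop the transform entirely and drive multiplier from the vertical distance between the touch and the knob's center (the timer and increaseVolume stay exactly as written):
func panKnob(pan: UIPanGestureRecognizer) {
    if pan.state == .Began {
        timer = NSTimer.scheduledTimerWithTimeInterval(0.1, target: self, selector: #selector(increaseVolume), userInfo: nil, repeats: true)
    }
    // Signed distance of the touch above/below the knob's own center; positive raises the volume
    let delta = pan.view!.bounds.midY - pan.locationInView(pan.view!).y
    multiplier = Double(delta) / 10   // tune the divisor to control how quickly the volume changes
    if pan.state == .Ended || pan.state == .Cancelled || pan.state == .Failed {
        timer?.invalidate()
        multiplier = 0
    }
}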

Swift - Add UIImageView as subview of UIWebView scrollView and scaling

I have a UIWebView and I have successfully added a UIImageView to the UIWebView's scrollView like so:
let localUrl = String(format: "%@/%@", PDFFilePath, fileNameGroup)
let url = NSURL.fileURLWithPath(localUrl)
panRecognizer = UITapGestureRecognizer(target: self, action: #selector(panDetected))
pinchRecognizer = UIPinchGestureRecognizer(target: self, action: #selector(pinchDetected))
panRecognizer.delegate = self
pinchRecognizer.delegate = self
webview = UIWebView()
webview.frame = self.view.bounds
webview.scrollView.frame = webview.frame
webview.userInteractionEnabled = true
webview.scalesPageToFit = true
webview.becomeFirstResponder()
webview.delegate = self
webview.scrollView.delegate = self
self.view.addSubview(webview)
webview.loadRequest(NSURLRequest(URL:url))
webview.gestureRecognizers = [pinchRecognizer, panRecognizer]
let stampView:StampAnnotation = StampAnnotation(imageIcon: UIImage(named: "approved.png"), location: CGPointMake(currentPoint.x, currentPoint.y))
self.webview.scrollView.addSubview(stampView)
My UIWebView's scrollView is scalable. Now I am looking for a way to have my UIImageView (StampAnnotation is a subclass of UIImageView) scale when the scrollView scales. So if the user zooms in on the scrollView, the UIImageView will get bigger while staying in a fixed position, and if the user zooms out, the UIImageView will get smaller along with the scrollView, again staying in a fixed position.
I really hope that makes sense. I have tried the following:
func pinchDetected(recognizer:UIPinchGestureRecognizer)
{
for views in webview.scrollView.subviews
{
if(views.isKindOfClass(UIImageView))
{
views.transform = CGAffineTransformScale(views.transform, recognizer.scale, recognizer.scale)
recognizer.scale = 1
}
}
if(appDelegate.annotationSelected == 0)
{
webview.scalesPageToFit = true
}
else
{
webview.scalesPageToFit = false
}
}
but this does nothing. If I remove this line:
recognizer.scale = 1
it scales way too big too fast. My question is: how do I get my UIImageView to scale when the UIWebView's scrollView zooms?
Any help would be appreciated.
This solved my problem.
func scrollViewDidZoom(scrollView: UIScrollView) {
for views in webview.scrollView.subviews
{
if(views.isKindOfClass(UIImageView))
{
views.transform = CGAffineTransformMakeScale(scrollView.zoomScale, scrollView.zoomScale)
}
}
}
No, it does not stay in a fixed position on the page, but I think that is a constraints issue?
You were close...
1) Add a property to hold onto an external reference for your stampViewFrame:
var stampViewFrame = CGRect(x: 100, y: 100, width: 100, height: 100)
2) Replace your scrollViewDidZoom() with this:
func scrollViewDidZoom(scrollView: UIScrollView) {
for views in webView.scrollView.subviews
{
if(views.isKindOfClass(UIImageView))
{
views.frame = CGRect(x: stampViewFrame.origin.x * scrollView.zoomScale, y: stampViewFrame.origin.y * scrollView.zoomScale, width: stampViewFrame.width * scrollView.zoomScale, height: stampViewFrame.height * scrollView.zoomScale)
}
}
}
3) Finally, because the zoom scale resets to 1 at the beginning of each new zooming action, you need to adjust the value of your stampViewFrame property:
func scrollViewDidEndZooming(scrollView: UIScrollView, withView view: UIView?, atScale scale: CGFloat) {
stampViewFrame = CGRect(x: stampViewFrame.origin.x * scale, y: stampViewFrame.origin.y * scale, width: stampViewFrame.width * scale, height: stampViewFrame.height * scale)
}
I also tried to answer your other question about layout during orientation change, but I now have a much better understanding of what you are trying to do. If you want your stampView to always be in the same place relative to the web content, you have to get into HTML/JS because the webpage lays itself out dynamically. A much simpler (and hopefully close enough) solution would be to add the following:
override func viewDidLayoutSubviews() {
super.viewDidLayoutSubviews()
webView.frame = view.bounds
stampView.frame = stampViewFrame
}
Use the scroll view delegate method scrollViewDidZoom:
func scrollViewDidZoom(scrollView: UIScrollView){
//Change the subview of scroll frame as per the scroll frame scale
//rect = initial position & size of the image.<class instance>
stampView.frame = CGRectMake((CGRectGetMaxX(rect)-rect.size.width)*webView.scrollView.zoomScale, (CGRectGetMaxY(rect)-rect.size.height)*webView.scrollView.zoomScale, rect.width*webView.scrollView.zoomScale,rect.height*webView.scrollView.zoomScale)
}

UIPageViewController with Peeking

I'm trying to create a page browser by using a UIPageViewController in Interface Builder that allows displaying part of the adjacent pages (aka peeking). I've been following a tutorial at http://www.appcoda.com/uipageviewcontroller-storyboard-tutorial/ (and ported it into Swift), which is rather straightforward, but I can't quite figure out what changes to make to have a page displayed in the UIPageViewController that is smaller than the screen (and centered), with the adjacent pages appearing partly on screen to the left and right.
I've tried to resize the page content view controller in IB and with code but the page view controller will still fill the whole screen.
Does anyone know of a tutorial that covers this functionality or what is a good approach to get the desired effect?
This screenshot below from Bamboo Paper shows what I'm trying to achieve...
Like @davew said, peeking views will need to use UIScrollView. I too searched for a way to use UIPageViewController but couldn't find any resource.
Using UIScrollView to make this feature was less painful than I had imagined.
Here is a simple example to see the basic controls in action.
First: make a UIViewController, then in the viewDidLoad method, add the following code:
float pad = 20;
NSArray* items = @[@"One", @"Two", @"Three", @"Four"];
self.view.backgroundColor = [UIColor greenColor];
UIScrollView* pageScrollView = [[UIScrollView alloc] initWithFrame:self.view.frame];
pageScrollView.opaque = NO;
pageScrollView.showsHorizontalScrollIndicator = NO;
pageScrollView.clipsToBounds = NO;
pageScrollView.pagingEnabled = YES;
adjustFrame(pageScrollView, pad, deviceH()/4, -pad*3, -deviceH()/2);
[self.view addSubview: pageScrollView];
float w = pageScrollView.frame.size.width;
for(int i = 0; i < [items count]; i++){
UIView* view = [[UIView alloc] initWithFrame:pageScrollView.bounds];
view.backgroundColor = [UIColor blueColor];
setFrameX(view, (i*w)+pad);
setFrameW(view, w-(pad*1));
[pageScrollView addSubview:view];
}
pageScrollView.contentSize = CGSizeMake(w*[items count], pageScrollView.frame.size.height);
FYI, I used these util functions to adjust the size of the view frames; I get sick of manually changing them with 3+ lines of code.
Update
I have wrapped up this code in a simple ViewController and put it on GitHub
https://github.com/kjantzer/peek-page-view-controller
It is in no way complete, but it's a working start.
I was just searching for a good solution to the same feature. I found a nice tutorial on Ray Wenderlich's site titled "How To Use UIScrollView to Scroll and Zoom Content". It illustrates multiple things you can do with UIScrollView. The fourth and final is "Viewing Previous/Next Pages" which is your "peek" feature.
I haven't implemented this yet, but the key steps seem to be:
Create a UIScrollView narrower than your screen
Turn on Paging Enabled
Turn off Clip Subviews
Fill the UIScrollView with your pages side by side
Embed the UIScrollView inside a UIView that fills the width of the screen so that you can capture and pass touches outside the Scroll View (see the sketch below)
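For that last step, the usual trick is a container view that overrides hitTest so touches outside the scroll view's frame still drive its paging. A minimal sketch (TouchForwardingView is my name, not from the tutorial):
import UIKit

class TouchForwardingView: UIView {
    weak var targetScrollView: UIScrollView?
    override func hitTest(_ point: CGPoint, with event: UIEvent?) -> UIView? {
        let hit = super.hitTest(point, with: event)
        // Touches that land on the container itself are rerouted into the scroll view,
        // so swipes on the "peeking" pages outside the scroll view's frame still page it
        return hit === self ? targetScrollView : hit
    }
}
Make the page scroll view a subview of a full-width TouchForwardingView and point targetScrollView at it.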
I rewrote Kevin Jantzer's answer in Swift 4 and it works!
override func viewDidLoad() {
super.viewDidLoad()
let pad: CGFloat = 20
let items: [UIColor] = [.blue, .yellow, .red, .green]
self.view.backgroundColor = .white
let pageScrollView = UIScrollView(frame: self.view.frame)
pageScrollView.isOpaque = false
pageScrollView.showsHorizontalScrollIndicator = false
pageScrollView.clipsToBounds = false
pageScrollView.isPagingEnabled = true
adjustFrame(myView: pageScrollView, x: pad, y: UIScreen.main.bounds.height / 4, w: -pad * 3, h: -UIScreen.main.bounds.height/2)
self.view.addSubview(pageScrollView)
let w = pageScrollView.frame.size.width
for (i, item) in items.enumerated() {
let myView = UIView(frame: pageScrollView.bounds)
myView.backgroundColor = item
setFrameX(myView: myView, x: (CGFloat(i) * w) + pad);
setFrameW(myView: myView, w: w-(pad*1));
pageScrollView.addSubview(myView)
}
pageScrollView.contentSize = CGSize(width: w * CGFloat(items.count), height: pageScrollView.frame.size.height);
}
func setFrame(myView: UIView, x: CGFloat?, y: CGFloat?, w: CGFloat?, h: CGFloat?){
var f = myView.frame
if let safeX = x {
f.origin = CGPoint(x: safeX, y: f.origin.y)
}
if let safeY = y {
f.origin = CGPoint(x: f.origin.x, y: safeY)
}
if let safeW = w {
f.size.width = safeW
}
if let safeH = h {
f.size.height = safeH
}
myView.frame = f
}
func setFrameX(myView: UIView, x: CGFloat) {
setFrame(myView: myView, x: x, y: nil, w: nil, h: nil)
}
func setFrameY(myView: UIView, y: CGFloat) {
setFrame(myView: myView, x: nil, y: y, w: nil, h: nil)
}
func setFrameW(myView: UIView, w: CGFloat) {
setFrame(myView: myView, x: nil, y: nil, w: w, h: nil)
}
func setFrameH(myView: UIView, h: CGFloat) {
setFrame(myView: myView, x: nil, y: nil, w: nil, h: h)
}
func adjustFrame(f: CGRect, x: CGFloat?, y: CGFloat?, w: CGFloat?, h: CGFloat?) -> CGRect {
var rect = f
if let safeX = x {
rect.origin = CGPoint(x: rect.origin.x + safeX, y: f.origin.y)
}
if let safeY = y {
rect.origin = CGPoint(x: f.origin.x, y: rect.origin.y + safeY)
}
if let safeW = w {
rect.size.width = safeW + rect.size.width
}
if let safeH = h {
rect.size.height = safeH + rect.size.height
}
return rect
}
func adjustFrame(myView: UIView, x: CGFloat, y: CGFloat, w: CGFloat, h: CGFloat) {
myView.frame = adjustFrame(f: myView.frame, x: x, y: y, w: w, h: h);
}
}

Can UIPinchGestureRecognizer and UIPanGestureRecognizer Be Merged?

I am struggling a bit trying to figure out if it is possible to create a single combined gesture recognizer that combines UIPinchGestureRecognizer with UIPanGestureRecognizer.
I am using pan for view translation and pinch for view scaling. I am doing incremental matrix concatenation to derive a resultant final transformation matrix that is applied to the view. This matrix has both scale and translation. Using separate gesture recognizers leads to a jittery movement/scaling. Not what I want. Thus, I want to handle concatenation of scale and translation once within a single gesture. Can someone please shed some light on how to do this?
6/14/14: Updated Sample Code for iOS 7+ with ARC.
The UIGestureRecognizers can work together; you just need to make sure you don't trash the current view's transform matrix. Use the CGAffineTransformScale method and related methods that take a transform as input, rather than creating the transform from scratch (unless you maintain the current rotation, scale, and translation yourself).
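In Swift terms, the difference is roughly this (a sketch; pinch stands for the UIPinchGestureRecognizer and someView for the view being scaled):
// Concatenates the new scale onto whatever scale/rotation/translation the view already has
someView.transform = someView.transform.scaledBy(x: pinch.scale, y: pinch.scale)
// Building the transform from scratch instead would throw that existing state away:
// someView.transform = CGAffineTransform(scaleX: pinch.scale, y: pinch.scale)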
Download Xcode Project
Sample UIPinchGesture project on Github.
Note: iOS 7 behaves weirdly with UIViews in IB that have Pan/Pinch/Rotate gestures applied. iOS 8 fixes it, but my workaround is to add all views in code, as in this code example.
Demo Video
Add them to a view and conform to the UIGestureRecognizerDelegate protocol
@interface ViewController () <UIGestureRecognizerDelegate>
@end
@implementation ViewController
- (void)viewDidLoad
{
[super viewDidLoad];
UIView *blueView = [[UIView alloc] initWithFrame:CGRectMake(100, 100, 150, 150)];
blueView.backgroundColor = [UIColor blueColor];
[self.view addSubview:blueView];
[self addMovementGesturesToView:blueView];
// UIImageView's and UILabel's don't have userInteractionEnabled by default!
UIImageView *imageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"BombDodge.png"]]; // Any image in Xcode project
imageView.center = CGPointMake(100, 250);
[imageView sizeToFit];
[self.view addSubview:imageView];
[self addMovementGesturesToView:imageView];
// Note: Changing the font size would be crisper than zooming a font!
UILabel *label = [[UILabel alloc] init];
label.text = @"Hello Gestures!";
label.font = [UIFont systemFontOfSize:30];
label.textColor = [UIColor blackColor];
[label sizeToFit];
label.center = CGPointMake(100, 400);
[self.view addSubview:label];
[self addMovementGesturesToView:label];
}
- (void)addMovementGesturesToView:(UIView *)view {
view.userInteractionEnabled = YES; // Enable user interaction
UIPanGestureRecognizer *panGesture = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePanGesture:)];
panGesture.delegate = self;
[view addGestureRecognizer:panGesture];
UIPinchGestureRecognizer *pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handlePinchGesture:)];
pinchGesture.delegate = self;
[view addGestureRecognizer:pinchGesture];
}
Implement gesture methods
- (void)handlePanGesture:(UIPanGestureRecognizer *)panGesture {
CGPoint translation = [panGesture translationInView:panGesture.view.superview];
if (UIGestureRecognizerStateBegan == panGesture.state ||UIGestureRecognizerStateChanged == panGesture.state) {
panGesture.view.center = CGPointMake(panGesture.view.center.x + translation.x,
panGesture.view.center.y + translation.y);
// Reset translation, so we can get translation delta's (i.e. change in translation)
[panGesture setTranslation:CGPointZero inView:self.view];
}
// Don't need any logic for ended/failed/canceled states
}
- (void)handlePinchGesture:(UIPinchGestureRecognizer *)pinchGesture {
if (UIGestureRecognizerStateBegan == pinchGesture.state ||
UIGestureRecognizerStateChanged == pinchGesture.state) {
// Use the x or y scale, they should be the same for typical zooming (non-skewing)
float currentScale = [[pinchGesture.view.layer valueForKeyPath:@"transform.scale.x"] floatValue];
// Variables to adjust the max/min values of zoom
float minScale = 1.0;
float maxScale = 2.0;
float zoomSpeed = .5;
float deltaScale = pinchGesture.scale;
// You need to translate the zoom to 0 (origin) so that you
// can multiply a speed factor and then translate back to "zoomSpace" around 1
deltaScale = ((deltaScale - 1) * zoomSpeed) + 1;
// Limit to min/max size (i.e maxScale = 2, current scale = 2, 2/2 = 1.0)
// A deltaScale is ~0.99 for decreasing or ~1.01 for increasing
// A deltaScale of 1.0 will maintain the zoom size
deltaScale = MIN(deltaScale, maxScale / currentScale);
deltaScale = MAX(deltaScale, minScale / currentScale);
CGAffineTransform zoomTransform = CGAffineTransformScale(pinchGesture.view.transform, deltaScale, deltaScale);
pinchGesture.view.transform = zoomTransform;
// Reset to 1 for scale delta's
// Note: not 0, or we won't see a size: 0 * width = 0
pinchGesture.scale = 1;
}
}
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer {
return YES; // Works for most use cases of pinch + zoom + pan
}
Resources
Xcode Gesture Sample Project
Apple's Gestures Guide
If anyone is interested in a Swift implementation of this using Metal to do the rendering, I have a project available here.
Swift
Many thanks to Paul! Here is his Swift version:
import UIKit
class ViewController: UIViewController, UIGestureRecognizerDelegate {
// var editorView: EditorView!  // from the original author's own project; not needed for this example
override func viewDidLoad() {
super.viewDidLoad()
let blueView = UIView(frame: .init(x: 100, y: 100, width: 300, height: 300))
view.addSubview(blueView)
blueView.backgroundColor = .blue
addMovementGesturesToView(blueView)
}
func addMovementGesturesToView(_ view: UIView) {
view.isUserInteractionEnabled = true
let panGesture = UIPanGestureRecognizer(target: self, action: #selector(handlePanGesture(_:)))
panGesture.delegate = self
view.addGestureRecognizer(panGesture)
let pinchGesture = UIPinchGestureRecognizer(target: self, action: #selector(handlePinchGesture(_:)))
pinchGesture.delegate = self
view.addGestureRecognizer(pinchGesture)
}
@objc private func handlePanGesture(_ panGesture: UIPanGestureRecognizer) {
guard let panView = panGesture.view else { return }
let translation = panGesture.translation(in: panView.superview)
if panGesture.state == .began || panGesture.state == .changed {
panGesture.view?.center = CGPoint(x: panView.center.x + translation.x, y: panView.center.y + translation.y)
// Reset translation, so we can get translation delta's (i.e. change in translation)
panGesture.setTranslation(.zero, in: self.view)
}
// Don't need any logic for ended/failed/canceled states
}
@objc private func handlePinchGesture(_ pinchGesture: UIPinchGestureRecognizer) {
guard let pinchView = pinchGesture.view else { return }
if pinchGesture.state == .began || pinchGesture.state == .changed {
let currentScale = scale(for: pinchView.transform)
// Variables to adjust the max/min values of zoom
let minScale: CGFloat = 0.2
let maxScale: CGFloat = 3
let zoomSpeed: CGFloat = 0.8
var deltaScale = pinchGesture.scale
// You need to translate the zoom to 0 (origin) so that you
// can multiply a speed factor and then translate back to "zoomSpace" around 1
deltaScale = ((deltaScale - 1) * zoomSpeed) + 1
// Limit to min/max size (i.e maxScale = 2, current scale = 2, 2/2 = 1.0)
// A deltaScale is ~0.99 for decreasing or ~1.01 for increasing
// A deltaScale of 1.0 will maintain the zoom size
deltaScale = min(deltaScale, maxScale / currentScale)
deltaScale = max(deltaScale, minScale / currentScale)
let zoomTransform = pinchView.transform.scaledBy(x: deltaScale, y: deltaScale)
pinchView.transform = zoomTransform
// Reset to 1 for scale delta's
// Note: not 0, or we won't see a size: 0 * width = 0
pinchGesture.scale = 1
}
}
func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer, shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -> Bool {
return true
}
private func scale(for transform: CGAffineTransform) -> CGFloat {
return sqrt(CGFloat(transform.a * transform.a + transform.c * transform.c))
}
}
Demo (on Simulator):
