Overlap and Swapping Images in iOS - ios

I want to swap two images. When I drag an image and it overlaps another image in the view by 50% or more, the two should be swapped. The problem is how to check whether the dragged image overlaps another image by 50% or more. Please suggest the logic with a code example.
The code I am trying is :
if (imgVw.tag != self.tag) {
    CGRect imgVwRect = CGRectMake(imgVw.frame.origin.x + (imgVw.frame.size.width / 2),
                                  imgVw.frame.origin.y + (imgVw.frame.size.height / 2),
                                  imgVw.frame.size.width,
                                  imgVw.frame.size.height);
    CGRect movingImgRect = CGRectMake(newCenter.x + self.frame.size.width,
                                      newCenter.y + self.frame.size.height,
                                      self.frame.size.width,
                                      self.frame.size.height);
    if (movingImgRect.origin.x >= imgVwRect.origin.x && movingImgRect.origin.y >= imgVwRect.origin.y)
    {
        NSLog(@"img view tag %ld", (long)imgVw.tag);
        UIImage *tempImg = self.image;
        [self setImage:imgVw.image];
        [imgVw setImage:tempImg];
    }
}

This is pretty straightforward.
Use CGRectIntersection to calculate the intersection of the two rects.
Then compare the area of the intersection with the area of the moving rect. Something like this (assuming I understand your code correctly):
if (imgVw.tag != self.tag)
{
    CGRect imgVwRect = CGRectMake(imgVw.frame.origin.x + (imgVw.frame.size.width / 2),
                                  imgVw.frame.origin.y + (imgVw.frame.size.height / 2),
                                  imgVw.frame.size.width,
                                  imgVw.frame.size.height);
    CGRect movingImgRect = CGRectMake(newCenter.x + self.frame.size.width,
                                      newCenter.y + self.frame.size.height,
                                      self.frame.size.width,
                                      self.frame.size.height);

    // Figure out the intersection rect of the 2 rectangles
    CGRect intersectionRect = CGRectIntersection(imgVwRect, movingImgRect);

    // Find the area of the intersection
    CGFloat xArea = intersectionRect.size.height * intersectionRect.size.width;

    // Find the area of the moving image
    // (this could be done once and saved in an iVar)
    CGFloat movingImgArea = movingImgRect.size.height * movingImgRect.size.width;

    // Is the intersection >= 1/2 the size of the moving image?
    if (xArea * 2 >= movingImgArea)
    {
        // Do whatever you need to do when the 2 images overlap by >= 50%
    }
}
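As a side note, the rect math above mirrors the offsets from your original code. If your drag handler keeps self.frame up to date while dragging, the same intersection test can live in a small helper that works directly on the views' frames. A minimal sketch, assuming both image views are siblings in the same superview (so their frames share a coordinate system):

// Returns the fraction (between 0 and 1) of movingView's frame that overlaps otherView's frame.
static CGFloat overlapFraction(UIView *movingView, UIView *otherView)
{
    CGRect intersection = CGRectIntersection(movingView.frame, otherView.frame);
    if (CGRectIsNull(intersection)) {
        return 0.0f;
    }
    CGFloat intersectionArea = intersection.size.width * intersection.size.height;
    CGFloat movingArea = movingView.frame.size.width * movingView.frame.size.height;
    return (movingArea > 0.0f) ? intersectionArea / movingArea : 0.0f;
}

Inside the pan handler you could then swap the images when overlapFraction(self, imgVw) >= 0.5f.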

Related

Get the pixel position of pan On UIImage in a UIImageView

I need the actual pixel position on the UIImage, not the position relative to the UIImageView's frame.
UIPanGestureRecognizer gives the location in the UIImageView, so it is of no use here.
I can multiply the x and y by a scale factor, but the UIImage scale is always 0.
I need to crop a circular area from the UIImage, blur it, and place it back at exactly the same position.
Flow:
Crop a circular area from the UIImage using CGImageCreateWithImageInRect
Round-rect the image using [UIBezierPath bezierPathWithRoundedRect:...]
Blur the round-rect image using CIGaussianBlur
Place the round-rect blurred image at the x,y position
For the first step I need the actual pixel position where the user tapped.
It depends on the image view's content mode.
For scale-to-fill mode you simply multiply the touch coordinates by the image-to-view ratio:
CGPoint pointOnImage = CGPointMake(pointOfTouch.x*(imageSize.width/frameSize.width), pointOfTouch.y*(imageSize.height/frameSize.height));
For all other modes you need to compute the actual image frame inside the view, which requires a different calculation per mode.
Adding aspect-fit mode from the comments:
For aspect fit you need to compute the actual displayed image frame, which can be smaller than the image view frame in one of the dimensions and is centered:
CGSize imageSize;     // the original image size
CGSize imageViewSize; // the image view size

CGFloat imageRatio = imageSize.width / imageSize.height;
CGFloat viewRatio = imageViewSize.width / imageViewSize.height;
CGRect imageFrame = CGRectMake(.0f, .0f, imageViewSize.width, imageViewSize.height);
if (imageRatio > viewRatio) {
    // image has room on top and bottom but fits perfectly on left and right
    CGSize displayedImageSize = CGSizeMake(imageViewSize.width, imageViewSize.width / imageRatio);
    imageFrame = CGRectMake(.0f, (imageViewSize.height - displayedImageSize.height) * .5f,
                            displayedImageSize.width, displayedImageSize.height);
}
else if (imageRatio < viewRatio) {
    // image has room on left and right but fits perfectly on top and bottom
    CGSize displayedImageSize = CGSizeMake(imageViewSize.height * imageRatio, imageViewSize.height);
    imageFrame = CGRectMake((imageViewSize.width - displayedImageSize.width) * .5f, .0f,
                            displayedImageSize.width, displayedImageSize.height);
}

// transform the coordinate
CGPoint locationInImageView; // received from touch
CGPoint locationOnImage = CGPointMake(locationInImageView.x, locationInImageView.y); // copy the original point
locationOnImage = CGPointMake(locationOnImage.x - imageFrame.origin.x, locationOnImage.y - imageFrame.origin.y); // translate to fix the origin
locationOnImage = CGPointMake(locationOnImage.x / imageFrame.size.width, locationOnImage.y / imageFrame.size.height); // transform to relative coordinates
locationOnImage = CGPointMake(locationOnImage.x * imageSize.width, locationOnImage.y * imageSize.height); // scale to original image coordinates
Just a note: if you want to transfer this to aspect-fill mode, all you need to do is swap < and > in both of the if statements.
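Spelled out, the aspect-fill variant of the frame computation looks like this (a sketch using the same variable names as above; the displayed frame now overflows the view in one dimension instead of fitting inside it):

if (imageRatio < viewRatio) {
    // image is relatively taller: fits on left/right, overflows on top/bottom
    CGSize displayedImageSize = CGSizeMake(imageViewSize.width, imageViewSize.width / imageRatio);
    imageFrame = CGRectMake(.0f, (imageViewSize.height - displayedImageSize.height) * .5f,
                            displayedImageSize.width, displayedImageSize.height);
}
else if (imageRatio > viewRatio) {
    // image is relatively wider: fits on top/bottom, overflows on left/right
    CGSize displayedImageSize = CGSizeMake(imageViewSize.height * imageRatio, imageViewSize.height);
    imageFrame = CGRectMake((imageViewSize.width - displayedImageSize.width) * .5f, .0f,
                            displayedImageSize.width, displayedImageSize.height);
}

The coordinate transformation that follows stays exactly the same.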

Combine 2 UIImage views with zoom and offset

Thanks for reading.
I have a background image and a foreground image. The foreground image is in a UIScrollView so can be resized and repositioned over the background image. The background image is set as Aspect Fit. I have a function that combines the two UIImages into a new UIImage. That works fine, but what I can't get right is the x,y co-ordinates of one view over the other.
Here's some code:
CGFloat bgImageScale = self.backgroundImageView.bounds.size.height / self.bgImage.size.height; // Gives me the AspectFit scale.
CGFloat bgOffsetX = (self.backgroundImageView.bounds.size.width - self.bgImage.size.width * bgImageScale) / 2.0;
CGFloat bgOffsetY = 0.0;
CGFloat fgImageScale = self.fgImageScrollView.zoomScale;
CGFloat fgOffsetX = -self.fgImageScrollView.contentOffset.x;
CGFloat fgOffsetY = -self.fgImageScrollView.contentOffset.y;
CGPoint imageOffset = CGPointMake((fgOffsetX - bgOffsetX) * bgImageScale, (fgOffsetY - bgOffsetY) * bgImageScale);
[self.delegate completedOverlayImage:
[self mergeImage:self.fgImage
withImage:self.bgImage
usingAlpha:0.5f
withOffset:imageOffset
andScale:fgImageScale / bgImageScale
]];
In brief, the relevant part of the completedOverlayImage code does the following:
[bottomImage drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
[topImage drawInRect:CGRectMake(imageOffset.x, imageOffset.y, newSize.width*imageScale, newSize.height*imageScale) blendMode:kCGBlendModeNormal alpha:alpha];
So I just can't get the imageOffset stuff right to get the new image overlaid the same as it appeared on-screen.
By the way, this app is iOS 7 and up only.
Can anyone help? Thanks.
Take a look at the UIView convertRect:toView: method.
Assuming that your view hierarchy looks like this:
SomeViewController
  View               // the view controller's main view
    BgView           // the background view (UIImageView)
    ScrollView
      FgView         // the foreground view (UIImageView)
If the foreground image is a UIImageView that's a child of the scroll view, then you can convert the FgView frame coordinates to the main view's coordinate system with a line of code like this:
CGRect foregroundFrame = [self.foregroundImageView convertRect:self.foregroundImageView.bounds toView:self.view];
Since the BgView's frame is already in mainView coordinates, this will give you both frames in the same coordinate system.
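If you also need that frame expressed relative to the background image view rather than the main view, the same API applies. A small sketch, with the property names (foregroundImageView, backgroundImageView) assumed from the hierarchy above:

// Foreground view's frame in the background image view's coordinate system.
CGRect fgFrameInBgView = [self.foregroundImageView convertRect:self.foregroundImageView.bounds
                                                        toView:self.backgroundImageView];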
OK, I solved it myself. So for anyone else attempting the same thing, here's the code:
CGFloat bgImageScale = self.backgroundImageView.bounds.size.height / self.bgImage.size.height;
CGFloat bgOffsetX = (self.backgroundImageView.bounds.size.width - self.bgImage.size.width * bgImageScale) / 2.0;
CGFloat bgOffsetY = 0.0;
CGFloat fgImageScale = self.fgImageScrollView.zoomScale;
CGFloat fgRelativeZoom = fgImageScale / bgImageScale; // How much is fg zoomed compared to bg?
CGFloat fgOffsetX = -self.fgImageScrollView.contentOffset.x; // We want the offset of the (0,0), not the offset of the viewport. Hence, negative.
CGFloat fgOffsetY = -self.fgImageScrollView.contentOffset.y;
CGPoint imageOffset = CGPointMake(fgOffsetX / bgImageScale - bgOffsetX, fgOffsetY / bgImageScale - bgOffsetY);
[self.delegate completedOverlayImage:
[self mergeImage:self.fgImage
withImage:self.bgImage
usingAlpha:0.5f
withOffset:imageOffset
andScale:fgRelativeZoom
]];
and the function to combine the images (assuming iOS 7, which lets you pass 0.0 as the scale in the UIGraphicsBeginImageContextWithOptions() call):
- (UIImage *)mergeImage:(UIImage *)topImage withImage:(UIImage *)bottomImage usingAlpha:(CGFloat)alpha withOffset:(CGPoint)imageOffset andScale:(CGFloat)imageScale {
    int width = bottomImage.size.width;
    int height = bottomImage.size.height;
    CGSize newSize = CGSizeMake(width, height);

    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [bottomImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    [topImage drawInRect:CGRectMake(imageOffset.x, imageOffset.y, newSize.width * imageScale, newSize.height * imageScale) blendMode:kCGBlendModeNormal alpha:alpha];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Hope that helps someone else out there.

How do I position my UIScrollView's image properly when switching orientation?

I'm having a lot of trouble figuring out how best to reposition my UIScrollView's image view (I have a gallery kind of app going right now, similar to Photos.app, specifically when you're viewing a single image) when the orientation switches from portrait to landscape or vice-versa.
I know my best bet is to manipulate the contentOffset property, but I'm not sure what it should be changed to.
I've played around a lot, and it seems like for whatever reason 128 works really well. In my viewWillLayoutSubviews method for my view controller I have:
if (UIInterfaceOrientationIsLandscape([UIApplication sharedApplication].statusBarOrientation)) {
    CGPoint newContentOffset = self.scrollView.contentOffset;
    if (newContentOffset.x >= 128) {
        newContentOffset.x -= 128.0;
    }
    else {
        newContentOffset.x = 0.0;
    }
    newContentOffset.y += 128.0;
    self.scrollView.contentOffset = newContentOffset;
}
else {
    CGPoint newContentOffset = self.scrollView.contentOffset;
    if (newContentOffset.y >= 128) {
        newContentOffset.y -= 128.0;
    }
    else {
        newContentOffset.y = 0.0;
    }
    newContentOffset.x += 128.0;
    self.scrollView.contentOffset = newContentOffset;
}
And it works pretty well. I hate how it's using a magic number though, and I have no idea where this would come from.
Also, whenever I zoom the image I have it set to stay centred (just like Photos.app does):
- (void)centerScrollViewContent {
    // Keep image view centered as user zooms
    CGRect newImageViewFrame = self.imageView.frame;

    // Center horizontally
    if (newImageViewFrame.size.width < CGRectGetWidth(self.scrollView.bounds)) {
        newImageViewFrame.origin.x = (CGRectGetWidth(self.scrollView.bounds) - CGRectGetWidth(self.imageView.frame)) / 2;
    }
    else {
        newImageViewFrame.origin.x = 0;
    }

    // Center vertically
    if (newImageViewFrame.size.height < CGRectGetHeight(self.scrollView.bounds)) {
        newImageViewFrame.origin.y = (CGRectGetHeight(self.scrollView.bounds) - CGRectGetHeight(self.imageView.frame)) / 2;
    }
    else {
        newImageViewFrame.origin.y = 0;
    }

    self.imageView.frame = newImageViewFrame;
}
So I need to keep the image positioned properly so that no black borders show around it after repositioning. (That's what the checks in the first block of code are for.)
Basically, I'm curious how to implement functionality like in Photos.app, where on rotate the scrollview intelligently repositions the content so that the middle of the visible content before the rotation is the same post-rotation, so it feels continuous.
You should adjust the UIScrollView's contentOffset whenever the scroll view lays out its subviews after its bounds have changed. Then, when the interface orientation changes, the scroll view's bounds change accordingly and the contentOffset is updated with them.
To do things "right" you should subclass UIScrollView and make all the adjustments there. This also lets you easily reuse your "special" scroll view.
The contentOffset calculation belongs in UIScrollView's layoutSubviews method. The problem is that this method is called not only when the bounds change but also when the scroll view is zoomed or scrolled. So the bounds value should be tracked to tell whether layoutSubviews is being called because the bounds changed (e.g. an orientation change) or because of a pan or pinch gesture.
So the first part of the UIScrollView subclass should look like this:
- (id)initWithFrame:(CGRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        // Set prevBoundsSize to the initial bounds, so the first time
        // layoutSubviews is called we won't do any contentOffset adjustments
        self.prevBoundsSize = self.bounds.size;
    }
    return self;
}

- (void)layoutSubviews {
    [super layoutSubviews];
    if (!CGSizeEqualToSize(self.prevBoundsSize, self.bounds.size)) {
        [self _adjustContentOffset];
        self.prevBoundsSize = self.bounds.size;
    }
    [self _centerScrollViewContent];
}
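The snippets reference prevBoundsSize and prevContentOffset properties that aren't declared in the answer. A minimal sketch of the declarations, using the SMScrollView class name from the project linked below (my assumption of how they might be wired up):

@interface SMScrollView : UIScrollView
@end

@interface SMScrollView ()
// Bounds and contentOffset as they were before the current layout pass,
// consumed by layoutSubviews and _adjustContentOffset above.
@property (nonatomic, assign) CGSize prevBoundsSize;
@property (nonatomic, assign) CGPoint prevContentOffset;
@end

prevContentOffset also has to be refreshed as the user scrolls (for example at the end of layoutSubviews when the bounds have not changed), so that it still holds the pre-rotation offset when _adjustContentOffset runs; see the linked project for the full bookkeeping.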
Here, the layoutSubviews method is called every time the UIScrollView is panned, zoomed, or has its bounds changed. The _centerScrollViewContent method is responsible for centering the zoomed view when its size becomes smaller than the scrollView's bounds, and it is called every time the user pans or zooms the scrollView or rotates the device. Its implementation is very similar to the one you provided in your question. The difference is that this method is written in the context of the UIScrollView class, so instead of using the self.imageView property to reference the zoomed view (which may not be available inside UIScrollView), the viewForZoomingInScrollView: delegate method is used.
- (void)_centerScrollViewContent {
    if ([self.delegate respondsToSelector:@selector(viewForZoomingInScrollView:)]) {
        UIView *zoomView = [self.delegate viewForZoomingInScrollView:self];
        CGRect frame = zoomView.frame;
        if (self.contentSize.width < self.bounds.size.width) {
            frame.origin.x = roundf((self.bounds.size.width - self.contentSize.width) / 2);
        } else {
            frame.origin.x = 0;
        }
        if (self.contentSize.height < self.bounds.size.height) {
            frame.origin.y = roundf((self.bounds.size.height - self.contentSize.height) / 2);
        } else {
            frame.origin.y = 0;
        }
        zoomView.frame = frame;
    }
}
But the more important method here is _adjustContentOffset. It is responsible for adjusting the contentOffset so that, when the UIScrollView's bounds change, the point that was at the center before the change remains at the center. Because of the condition in layoutSubviews, it is called only when the bounds actually change (e.g. on an orientation change).
- (void)_adjustContentOffset {
    if ([self.delegate respondsToSelector:@selector(viewForZoomingInScrollView:)]) {
        UIView *zoomView = [self.delegate viewForZoomingInScrollView:self];

        // Using the contentOffset and bounds values from before the bounds changed
        // (e.g. an interface orientation change), find the visible center point in
        // the unscaled coordinate space of the zooming view.
        CGPoint prevCenterPoint = (CGPoint){
            .x = (self.prevContentOffset.x + roundf(self.prevBoundsSize.width / 2) - zoomView.frame.origin.x) / self.zoomScale,
            .y = (self.prevContentOffset.y + roundf(self.prevBoundsSize.height / 2) - zoomView.frame.origin.y) / self.zoomScale,
        };

        // Here you can change zoomScale if required
        // [self _changeZoomScaleIfNeeded];

        // Calculate the new contentOffset using the previously calculated center point
        // and the new contentOffset and bounds values.
        CGPoint contentOffset = CGPointMake(0.0, 0.0);
        CGRect frame = zoomView.frame;
        if (self.contentSize.width > self.bounds.size.width) {
            frame.origin.x = 0;
            contentOffset.x = prevCenterPoint.x * self.zoomScale - roundf(self.bounds.size.width / 2);
            if (contentOffset.x < 0) {
                contentOffset.x = 0;
            } else if (contentOffset.x > self.contentSize.width - self.bounds.size.width) {
                contentOffset.x = self.contentSize.width - self.bounds.size.width;
            }
        }
        if (self.contentSize.height > self.bounds.size.height) {
            frame.origin.y = 0;
            contentOffset.y = prevCenterPoint.y * self.zoomScale - roundf(self.bounds.size.height / 2);
            if (contentOffset.y < 0) {
                contentOffset.y = 0;
            } else if (contentOffset.y > self.contentSize.height - self.bounds.size.height) {
                contentOffset.y = self.contentSize.height - self.bounds.size.height;
            }
        }
        zoomView.frame = frame;
        self.contentOffset = contentOffset;
    }
}
Bonus
I've created a working SMScrollView class (here is a link to GitHub) implementing the above behavior, plus some bonuses:
You can notice that in the Photos app, zooming a photo, scrolling it to one of its boundaries, and then rotating the device does not keep the center point in place; instead it sticks the scrollView to that boundary. And if you scroll to one of the corners and then rotate, the scrollView will stick to that corner as well.
In addition to adjusting contentOffset you may find that you also want to adjust the scrollView's zoomScale. For example, assume you are viewing a photo in portrait mode that is scaled to fit the screen. When you rotate the device to landscape you may want to scale the photo up to take advantage of the available space.
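For that second point, the commented-out [self _changeZoomScaleIfNeeded] call in _adjustContentOffset above is where such logic would go. Here is a minimal sketch of one possible implementation (my assumption, not taken from the linked project): if the photo was at its fit-to-screen scale before the rotation, re-fit it to the new bounds; otherwise leave the user's zoom level alone.

- (void)_changeZoomScaleIfNeeded {
    if (![self.delegate respondsToSelector:@selector(viewForZoomingInScrollView:)]) {
        return;
    }
    UIView *zoomView = [self.delegate viewForZoomingInScrollView:self];
    CGSize contentSize = zoomView.bounds.size; // unscaled content size
    if (contentSize.width <= 0.0f || contentSize.height <= 0.0f) {
        return;
    }
    // Scale at which the whole content fits inside the new bounds.
    CGFloat fitScale = MIN(self.bounds.size.width / contentSize.width,
                           self.bounds.size.height / contentSize.height);
    BOOL wasAtMinimumZoom = (self.zoomScale <= self.minimumZoomScale);
    self.minimumZoomScale = fitScale;
    if (wasAtMinimumZoom) {
        self.zoomScale = fitScale;
    }
}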

Unwanted image animation when resizing UITableView

For an iPad project I implemented an NSBrowser-like interface which supports a dynamic number of columns. Each column is represented by a UITableView.
When adding or removing columns, I'm using UIView's animateWithDuration:animations: to change the width of the UITableViews.
The problem I'm running into:
The system adds an unwanted frame-size animation for the imageView of each table view cell, which animates the imageView from very large down to its initial size. This looks really awkward, so I'm looking for ways to get rid of it, while keeping the animated frame-size change for the enclosing tableViews.
Any ideas what I might be doing wrong?
I posted a sample project demonstrating the issue here:
https://github.com/iljaiwas/TableViewFrameTest
Here is where I setup the cell:
https://github.com/iljaiwas/TableViewFrameTest/blob/master/TableViewFrameTest/TestTableViewController.m#L61
Here is where I trigger the animation:
https://github.com/iljaiwas/TableViewFrameTest/blob/master/TableViewFrameTest/TestViewController.m#L46
I was having the same issue and found this link (http://www.objc.io/issue-5/iOS7-hidden-gems-and-workarounds.html) which has a section on Blocking Animations.
To get your example working add the following at the top of TestTableViewController.m after the import statement:
@interface MyTableViewCell : UITableViewCell
@end

@implementation MyTableViewCell
- (void)layoutSubviews {
    [UIView performWithoutAnimation:^{
        [super layoutSubviews];
    }];
}
@end
Then change the following line in viewDidLoad to use MyTableViewCell:
[self.tableView registerClass:[MyTableViewCell class] forCellReuseIdentifier:@"Cell"];
Now run the example and you will no longer have your unwanted image animation.
This helped for me:
Set the UIImageView's contentMode (in the table view cell) to Aspect Fit.
I don't know why, but it works for me.
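In code that is a one-liner when configuring the cell (assuming the cell's built-in imageView is the one being animated):

cell.imageView.contentMode = UIViewContentModeScaleAspectFit;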
I was having a similar issue with the (pretty large) image shrinking down from its original size when the editing animation fired to show the delete button. The picture would fly across my screen as it shrank. Pretty crazy. Anyway, I fixed it by using this category on UIImage to resize the image before putting it in the UIImageView:
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)scaledSize {
    UIGraphicsBeginImageContext(scaledSize);
    [image drawInRect:CGRectMake(0, 0, scaledSize.width, scaledSize.height)];
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}
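Usage when configuring the cell might look like the following; the 64x64 target size is an assumption (pick whatever matches your row height), and originalImage stands in for your full-size image:

// Scale the image down before assigning it, so the cell never holds an
// oversized image for the editing animation to shrink.
cell.imageView.image = [self imageWithImage:originalImage scaledToSize:CGSizeMake(64.0f, 64.0f)];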
Edit: here's another category that does a much better job of scaling down the image, depending on how good you need it to look. I originally found these in another SO question a while back but can't find it now to link to; I used them as a starting point, and this one was especially helpful:
- (UIImage *)imageScaledToFitRect:(CGRect)rect {
    // compute scale factor for imageView
    CGFloat widthScaleFactor = CGRectGetWidth(rect) / self.size.width;
    CGFloat heightScaleFactor = CGRectGetHeight(rect) / self.size.height;
    NSLog(@"Rect width: %f, Image width: %f, WidthFactor: %f", rect.size.width, self.size.width, widthScaleFactor);
    NSLog(@"Rect height: %f, Image height: %f, HeightFactor: %f", rect.size.height, self.size.height, heightScaleFactor);

    CGFloat imageViewXOrigin = 0;
    CGFloat imageViewYOrigin = 0;
    CGFloat imageViewWidth = 0;
    CGFloat imageViewHeight = 0;

    // if image is narrow and tall, scale to width and align vertically to the top
    if (widthScaleFactor > heightScaleFactor) {
        imageViewWidth = self.size.width * widthScaleFactor;
        imageViewHeight = self.size.height * widthScaleFactor;
    }
    // else if image is wide and short, scale to height and align horizontally centered
    else {
        imageViewWidth = self.size.width * heightScaleFactor;
        imageViewHeight = self.size.height * heightScaleFactor;
        imageViewXOrigin = -(imageViewWidth - CGRectGetWidth(rect)) / 2;
    }

    return [self imageScaledToRect:CGRectMake(imageViewXOrigin, imageViewYOrigin, imageViewWidth, imageViewHeight)];
}
Hope this can help someone else.

custom map annotation callout - how to control width

I've successfully implemented the custom map annotation callout code from the asynchrony blog post.
(When the user taps a map pin, I show a customized image instead of the standard callout view.)
The only remaining problem is that the callout occupies the entire width of the view; the app would look much better if the width corresponded to the image I'm using.
I have subclassed MKAnnotationView, and when I set its contentWidth to the width of the image, the triangle does not always point back to the pin, or the image is not even inside its wrapper view.
Any help or suggestions would be great.
Thanks.
I ran into a similar problem when implementing the CalloutMapAnnotationView for the iPad. Basically I didn't want the iPad version to take the full width of the mapView.
In the prepareFrameSize method set your width:
- (void)prepareFrameSize {
    // ...
    // changing frame x/y origins here does nothing
    frame.size = CGSizeMake(320.0f, height);
    self.frame = frame;
}
Next you'll have to calculate the xOffset based on the parentAnnotationView:
- (void)prepareOffset {
    // Base x calculations from center of parent view
    CGPoint parentOrigin = [self.mapView convertPoint:self.parentAnnotationView.center
                                             fromView:self.parentAnnotationView.superview];
    CGFloat xOffset = 0;
    CGFloat mapWidth = self.mapView.bounds.size.width;
    CGFloat halfWidth = mapWidth / 2;
    CGFloat x = parentOrigin.x + (320.0f / 2);

    if (parentOrigin.x < halfWidth && x < 0)            // left half of map
        xOffset = -x;
    else if (parentOrigin.x > halfWidth && x > mapWidth) // right half of map
        xOffset = -(x - mapWidth);

    // yOffset calculation ...
}
Now in drawRect:(CGRect)rect before the callout bubble is drawn:
- (void)drawRect:(CGRect)rect {
    // ...
    // Calculate the caret location in the frame
    if (self.centerOffset.x == 0.0f)
        parentX = 320.0f / 2;
    else if (self.centerOffset.x < 0.0f)
        parentX = (320.0f / 2) + -self.centerOffset.x;
    // ...
}
Hope this helps put you on the right track.
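Since your question mentions a contentWidth property on the subclass, one way to avoid the hard-coded 320.0f above is to base the math on that instead. A rough sketch, assuming contentWidth is a CGFloat property holding the full width you want for the callout bubble:

- (void)prepareFrameSize {
    // ...
    // use the callout's own content width instead of the hard-coded 320.0f
    frame.size = CGSizeMake(self.contentWidth, height);
    self.frame = frame;
}

The (320.0f / 2) terms in prepareOffset and drawRect: would then become (self.contentWidth / 2), so the caret math follows the narrower callout.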
