I want to create a crop feature in Swift. It will display one giant image view with the image to be cropped. On top of it sits another image view with a solid border and a transparent background, roughly 1/3 the size of the giant image view.
The user can drag that overlay image view around and position it over the part of the giant image view they want to crop.
When they click the crop button, I want to grab only the portion of the underlying giant image view that the overlay image view is covering. I'm not quite sure how to do that.
Here is what I've tried so far. It crops to the size of the overlay image view's bounds, but for some reason it always takes the top portion of the giant image view. How can I keep the crop size it already produces, but grab the correct portion of the giant image view?
@IBOutlet weak var overlayImage: UIImageView!
@IBOutlet weak var imageView: UIImageView!
var lastLocation = CGPoint()

override func viewDidLoad() {
    super.viewDidLoad()
    // drawCustomImage simply creates a transparent background and a dashed
    // bordered box to display on the top 1/3 of the giant image view
    let image = drawCustomImage()
    overlayImage.image = image
}
override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) {
    if let touch = touches.first {
        self.lastLocation = touch.locationInView(self.view)
    }
}

override func touchesMoved(touches: Set<UITouch>, withEvent event: UIEvent?) {
    if let touch = touches.first {
        let location = touch.locationInView(self.view)
        // Don't let the user move the overlay image view past the bounds
        // of the giant image view
        let newY = (location.y - self.lastLocation.y) + self.overlayImage.center.y
        if newY < (self.imageView.frame.maxY - (self.overlayImage.frame.height / 3))
            && newY > (self.imageView.frame.minY + (self.overlayImage.frame.height / 3)) {
            self.overlayImage.center = CGPoint(x: self.overlayImage.center.x, y: newY)
        }
    }
}
@IBAction func cropBtn(sender: AnyObject) {
    let imageRef: CGImageRef = CGImageCreateWithImageInRect(imageView.image!.CGImage, overlayImage.bounds)!
    imageView.bounds = overlayImage.bounds
    imageView.image = UIImage(CGImage: imageRef)
    overlayImage.hidden = true
}
I'm having trouble getting hitTest(_:with:) to traverse a UIImageView when it's added programmatically.
In IB I make a square area and give it a tag of 1. Within that square I insert a UIImageView (tag=2) (orange/white in the screenshot below). In the ViewController I programmatically create another square (tag=3), then add a UIImageView (tag=4) within that (red/green in the screenshot below). The following picture shows this:
I'm using touchesBegan(_:with:) to perform hitTest. When I touch the orange I get a printout of a view with tag 1 (expected). If I touch the white square I get printout with tag 2 (expected). These were created in IB.
When I touch the red square I get tag 3 (expected). When I touch green I get tag 3! (NOT expected). I've looked at the runtime hierarchy and don't see any difference in structure between IB & programmatic. It appears that hitTest(_:with:) doesn't recognize/traverse/include/detect the programmatic UIImageView. What's the difference?
Here's my ViewController code:
import UIKit

class ViewController: UIViewController {
    @IBOutlet var gameView: UIView!
    @IBOutlet var boardView: UIImageView!
    var g2: UIView!
    var b2: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        g2 = UIView(frame: CGRect(x: 100, y: 50, width: 240, height: 240))
        g2.backgroundColor = .red
        g2.tag = 3
        view.addSubview(g2)
        b2 = UIImageView(frame: CGRect(x: 75, y: 75, width: 80, height: 80))
        b2.backgroundColor = .green
        b2.tag = 4
        g2.addSubview(b2)
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        if event!.type != .touches { return }
        if let first = touches.first {
            let loc = first.location(in: view)
            if let v = view.hitTest(loc, with: nil) {
                print("\(v)")
            }
        }
    }
}
I've changed the programmatic UIImageView to a UIView and that IS detected by hitTest as expected.
UIImageView has isUserInteractionEnabled set to false by default, i.e. it doesn't receive any touches.
This is true for a storyboard-created image view too; you have probably checked the User Interaction Enabled checkbox there.
Most other views have this property set to true by default, but if you want a UIImageView to receive touches, you need to enable it explicitly:
b2.isUserInteractionEnabled = true
First of all, I have checked almost everywhere on the internet but didn't find a solution to this.
In my case I have multiple UIView objects inside a superview (you could call it a canvas) where I am drawing these views.
All of these views have a pan gesture attached, so they can be moved anywhere inside their superview.
Some of these views can be rotated using either a rotation gesture or CGAffineTransformRotate.
Whenever any of the views is moved outside of the main view, it should be deleted.
Here is my code:
@IBOutlet weak var mainView: UIView!

var newViewToAdd = UIView()
newViewToAdd.layer.masksToBounds = true

var transForm = CGAffineTransformIdentity
transForm = CGAffineTransformScale(transForm, 0.8, 1)
transForm = CGAffineTransformRotate(transForm, CGFloat(M_PI_4)) // Making the transformation
newViewToAdd.layer.shouldRasterize = true // Making the view edges smooth after applying the transformation
newViewToAdd.transform = transForm
self.mainView.addSubview(newViewToAdd) // Adding the view to the main view.
The gesture handling is inside the custom UIView subclass:
var lastLocation: CGPoint = CGPointMake(0, 0)

override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) {
    self.superview?.bringSubviewToFront(self)
    lastLocation = self.center // Getting the center point of the view on first touch.
}

func detectPan(recognizer: UIPanGestureRecognizer) {
    let translation = recognizer.translationInView(self.superview!) // Making the translation
    self.center = CGPointMake(lastLocation.x + translation.x, lastLocation.y + translation.y) // Updating the center point.
    switch recognizer.state {
    case .Began:
        break
    case .Changed:
        // MARK: - Checking whether the view is outside of the superview or not
        if !CGRectEqualToRect(CGRectIntersection(self.superview!.bounds, self.frame), self.frame) {
            // If true, the view's background color is changed; else the background is restored.
            self.backgroundColor = outsideTheViewColor
            let imageViewBin = UIImageView(frame: CGRectMake(0, 0, 20, 25))
            imageViewBin.image = UIImage(named: "GarbageBin")
            imageViewBin.center = CGPointMake(self.frame.width / 2, self.frame.height / 2)
            addSubview(imageViewBin)
        } else {
            for subView in self.subviews {
                if subView.isKindOfClass(UIImageView) {
                    subView.removeFromSuperview()
                }
            }
            self.backgroundColor = deSelectedColorForTable
        }
    case .Ended:
        if !CGRectEqualToRect(CGRectIntersection(self.superview!.bounds, self.frame), self.frame) {
            // If true, the view will be deleted.
            self.removeFromSuperview()
        }
    default:
        break
    }
}
The main problem: if the view is not rotated or transformed, all of the CGRectIntersection checks inside the .Changed/.Ended cases work as expected. But if the view is rotated or transformed, the CGRectIntersection test always comes out true, even when the view is inside the mainView, so the view is removed from its superview.
Please help me find my mistake.
Thanks in advance.
The frame of a view is recomputed after a transform is applied: for a rotated view, frame is the axis-aligned bounding box of the rotated rectangle, which is larger than the view itself. The following code checks whether that frame is fully inside the superview's bounds:
if CGRectContainsRect(self.superview!.bounds, self.frame) {
    // view is inside of the superview
} else {
    // view is outside of the superview
}
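To see why the intersection test misbehaves after a rotation, here is a minimal illustration (the specific numbers are mine):

```swift
let box = UIView(frame: CGRect(x: 20, y: 20, width: 100, height: 100))
box.transform = CGAffineTransformMakeRotation(CGFloat(M_PI_4))
// bounds is still 100x100, but frame is now the axis-aligned bounding
// box of the rotated square: roughly 141x141 points (100 * sqrt(2)).
// CGRectIntersection / CGRectContainsRect operate on that bounding box,
// not on the visible rotated shape, so a rotated view near an edge can
// "fail" the containment test even though it looks fully inside.
print(box.bounds.size) // (100.0, 100.0)
print(box.frame.size)  // approximately (141.4, 141.4)
```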
I am trying to implement simple image manipulation in Swift: I want to draw on top of the image in a UIImageView.
The interface will look like the following picture:
When the user taps an emoji at the bottom, I want to drag it and drop it onto the image view (or have a single tap move it there).
Then it should be movable around inside the image view, and the whole image saved to the gallery.
I could not find a useful source about this on Google, so where should I start?
If the emoji is a UIImageView, you can implement touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?). In the following, mainImageView is the view to which you want to add the emoji.
override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) {
    guard let touch = touches.first else { return }
    let touchLocation = touch.locationInView(self.view)
    if CGRectContainsPoint(emojiImageView.frame, touchLocation) {
        // Your emoji image view has been selected.
        addImageviewToMainImageview(emojiImageView)
    }
}

func addImageviewToMainImageview(imageView: UIImageView) {
    imageView.removeFromSuperview()
    let origin = CGPoint(x: mainImageView.center.x, y: mainImageView.center.y)
    let frame = CGRect(x: origin.x, y: origin.y, width: imageView.frame.size.width, height: imageView.frame.size.height)
    imageView.frame = frame
    mainImageView.addSubview(imageView)
}
If you want to move the emojiImageView around within the mainImageView, you should subclass UIView to implement touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) and touchesMoved(touches: Set<UITouch>, withEvent event: UIEvent?) in a similar fashion.
1) Detect the touch with touchesBegan, determine if one of the mainImageViews subviews has been touched. Set a reference to this subview.
2) If a subview has been touched, touchesMoved will use that reference to determine the new location of the subview:
if let touch = touches.first {
    let touchLocation = touch.locationInView(self)
    selectedSubview.center = touchLocation
}
Use the following code to combine two images (I am not adding code for the drag-and-drop part):
UIImage *mainImage = firstImage;
UIImage *smallImage = secondImage;
CGPoint renderingPoint = CGPointMake(50, 50);
UIImage *outputImage = nil;

if (smallImage.size.width > mainImage.size.width || smallImage.size.height > mainImage.size.height) {
    return smallImage;
}

UIGraphicsBeginImageContext(firstImage.size);
[firstImage drawAtPoint:CGPointMake(0, 0)];
[secondImage drawAtPoint:renderingPoint];
outputImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The outputImage here is the result of combining the two images. The renderingPoint is the point at which drawing of the second image starts.
Just add the touches methods and get the point where the touch ends; treating that point as the center and using smallImage's width and height, you can calculate its rendering point.
Also keep in mind that renderingPoint is relative to the original size of the image, not the size of the image view, so scale up or down to the image view accordingly.
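Since the question is in Swift, a Swift translation of the Objective-C snippet above might look like this (a sketch; the function name is mine, and renderingPoint is in the pixel space of mainImage, not the image view):

```swift
func combineImages(mainImage: UIImage, smallImage: UIImage, renderingPoint: CGPoint) -> UIImage {
    // If the overlay is larger than the base image, bail out as the
    // Objective-C version does.
    if smallImage.size.width > mainImage.size.width ||
        smallImage.size.height > mainImage.size.height {
        return smallImage
    }
    UIGraphicsBeginImageContext(mainImage.size)
    mainImage.drawAtPoint(CGPointZero)          // base image at the origin
    smallImage.drawAtPoint(renderingPoint)      // overlay at the chosen point
    let outputImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return outputImage
}
```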
Say, for example, I have a white background, and every time the view is touched a blue box sized 60x60 is added to the screen. If I keep tapping all over the screen until the entire view becomes blue, how can I use code to detect that and notify the user, for example with an alert controller saying that the view has now completely changed color?
If that made sense to anyone, I would appreciate any suggestions :)
As others have stated, the best approach is to hold a reference to a 2D array to correlate with the UI. I would approach this by first:
1) Create a subclass of UIView called overlayView. This will be the view overlaid on top of your View Controller's view. This subclass will override init(frame: CGRect). You may implement an NxM grid on initialization, creating and adding the subviews as appropriate.
Example:
let boxWidth: CGFloat = 60
let boxHeight: CGFloat = 60
let boxFrame: CGRect = CGRectMake(0, 0, 600, 600)

override init(frame: CGRect) {
    super.init(frame: boxFrame)
    for i in 0 ..< 10 { // column
        for j in 0 ..< 10 { // row
            let xCoordinate = CGFloat(i) * boxWidth
            let yCoordinate = CGFloat(j) * boxHeight
            let subFrame = CGRect(x: xCoordinate, y: yCoordinate, width: boxWidth, height: boxHeight)
            let subView = UIView(frame: subFrame)
            subView.backgroundColor = UIColor.greenColor()
            self.addSubview(subView)
        }
    }
}
2) Override touchesBegan(touches: NSSet, withEvent event: UIEvent) within overlayView. The implementation should look something like this:
override func touchesBegan(touches: NSSet, withEvent event: UIEvent) {
    let touch: UITouch = touches.anyObject() as UITouch
    let touchLocation: CGPoint = touch.locationInView(self)
    for subView in self.subviews {
        if CGRectContainsPoint(subView.frame, touchLocation) {
            subView.removeFromSuperview()
        }
    }
}
2a) You will also want to create a function to notify your 2D array that this subview has been removed. This will be called from within your if statement. In addition, this function can detect whether the array is empty (all values set to false).
OR
2b) If you do not need a reference to the model (you do not care which boxes were tapped, only that they are all gone), you can skip 2a and simply check whether self.subviews.count == 0; in that case you do not need a 2D array at all. When the check passes, present an alert to the user.
3) Create an overlayView in your main View Controller and add it as a subview.
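For step 2b, the emptiness check and alert might be wired up like this (a sketch; the onAllBoxesCleared closure and boxRemoved function are hypothetical names, not part of the answer above):

```swift
// Inside overlayView: call boxRemoved() right after removeFromSuperview()
// in touchesBegan.
var onAllBoxesCleared: (() -> Void)?

func boxRemoved() {
    if self.subviews.count == 0 {
        onAllBoxesCleared?()
    }
}

// In the view controller, after adding the overlay:
// overlayView.onAllBoxesCleared = { [weak self] in
//     let alert = UIAlertController(title: "Done!",
//                                   message: "The view has completely changed color.",
//                                   preferredStyle: .Alert)
//     alert.addAction(UIAlertAction(title: "OK", style: .Default, handler: nil))
//     self?.presentViewController(alert, animated: true, completion: nil)
// }
```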
My goal is to overlay two images (the first a photo from camera roll, the second a PNG of a cartoon ghost).
I've gotten far enough that I'm passing a camera roll image and a selected ghost image into a view controller. But where I'm stuck is how to layer these images in a usable way.
I can bring in the original photo a couple of ways (either by starting with an image view or by creating one programmatically in a view), and I can add the ghost on top of it (mostly programmatically).
But I can only control the ghost's size and placement manually. I'd like it to come in centered on the original image and match either its height or width, whichever is smaller, since the photo can be horizontal or vertical.
After an evening of searching, I've come up blank. But surely there's got to be a way to grab the image view's coordinates and make that calculation, right?
Here's what I've got:
import UIKit

class TwoLayerViewController: UIViewController {
    @IBOutlet weak var bottomLayerImage: UIImageView!
    var originalPhoto: UIImage?
    var chosenGhostPhoto: UIImage?

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        bottomLayerImage.image = originalPhoto
        bottomLayerImage.contentMode = UIViewContentMode.ScaleAspectFit
        let ghostView = UIImageView(frame: CGRectMake(bottomLayerImage.frame.origin.x, bottomLayerImage.frame.origin.y, 100, 100))
        ghostView.image = chosenGhostPhoto
        ghostView.backgroundColor = UIColor.grayColor()
        bottomLayerImage.addSubview(ghostView)
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}
Try putting the top image not in an ImageView, but in a sublayer over the other one. Like this:
class YourImageView: UIImageView {
    let containerLayer = CALayer()

    func drawImageOnTop(img: UIImage) {
        // Attach the container layer to the view's layer the first time.
        if containerLayer.superlayer == nil {
            layer.addSublayer(containerLayer)
        }
        let piclayer = CALayer()
        let sz = image?.size ?? CGSize(width: 0, height: 0)
        piclayer.frame = CGRect(origin: self.layer.contentsRect.origin, size: sz)
        piclayer.position = CGPoint(x: sz.width / 2, y: sz.height / 2)
        piclayer.contentsGravity = .resizeAspect
        piclayer.contents = img.cgImage
        containerLayer.addSublayer(piclayer)
    }
}