I am working on an iOS app in Swift 3.0 and have integrated card scanning using Card.IO for iOS.
I can scan the card successfully, but the problem is that the camera view does not take the frame I give it.
It only accepts a width and height in a 3:4 ratio.
I want the camera to take half of the screen height and the full screen width, but it does not. When I pass the frame as
cardView = CardIOView(frame: CGRect(x: 0, y: 100, width: screen.width, height: screen.height / 2))
it does not take the full screen width.
Is this a bug on the SDK side? I have tried everything, but with no success.
If anyone can help, thanks in advance!
My suggestion: you are determining the screen size incorrectly. See Swift: Determine iOS Screen size:
let screenSize = UIScreen.main.bounds
let screenWidth = screenSize.width
let screenHeight = screenSize.height
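With those values, the frame from the question can be built so it really does span the full width and half the height. A minimal sketch of the arithmetic, using a hypothetical 390x844 point screen in place of UIScreen.main.bounds:

```swift
import Foundation

// Hypothetical screen bounds standing in for UIScreen.main.bounds
// (390x844 points, e.g. an iPhone 13 Pro in portrait).
let screenSize = CGRect(x: 0, y: 0, width: 390, height: 844)
let screenWidth = screenSize.width
let screenHeight = screenSize.height

// Full screen width, half the screen height, starting 100 points down.
let cardFrame = CGRect(x: 0, y: 100,
                       width: screenWidth,
                       height: screenHeight / 2)
```

In the app itself the same CGRect would be passed to CardIOView(frame:); the key point is to read the width and height from UIScreen.main.bounds rather than from a parent view that may not span the screen.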
This question already has an answer here:
Swift - SpriteKit CGPoint Alignment
(1 answer)
Closed 9 months ago.
I am writing an iOS game in Swift using SpriteKit and want to find the screen resolution to properly place my sprites. I found multiple ways on the internet, but none of them gives me a correct resolution that helps me place my sprites.
The one that works fine on an iPhone 13 Pro is the following:
screenSize = self.size
screenSize.width /= (UIScreen.main.bounds.height / screenSize.height) / (UIScreen.main.bounds.width / screenSize.width)
let background = SKSpriteNode(imageNamed: "Landscape.jpg")
background.position = CGPoint(x: 0, y: 0)
background.size = screenSize
If I use the recommended UIScreen.main.bounds, this is the outcome on an iPhone:
But on an iPad for example, the dimensions are too big.
Is there a unique way of finding the screen resolution on all devices? Or does scene scaling enter into the equation?
Try this:
// Get main screen bounds
let screenSize: CGRect = UIScreen.main.bounds
let screenWidth = screenSize.width
let screenHeight = screenSize.height
print("Screen width = \(screenWidth), screen height = \(screenHeight)")
This is the output on an iPhone 13 Pro simulator:
Screen width = 390.0, screen height = 844.0
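On the scene-scaling part of the question: UIScreen.main.bounds is in points and knows nothing about the scene's scaleMode. If the scene uses .aspectFill, SpriteKit scales the scene uniformly until it covers the view, so only part of the scene is visible, and the visible size in scene coordinates (which is what the formula in the question approximates) can be computed directly. A sketch using plain CGSize maths, with made-up scene and screen sizes (the function name is invented for this sketch):

```swift
import Foundation

// Visible portion of an .aspectFill-scaled scene, in scene coordinates.
// The scene is scaled by the larger of the two width/height ratios, so
// dividing the screen size by that scale gives the region actually shown.
func visibleSceneSize(scene: CGSize, screen: CGSize) -> CGSize {
    let scale = max(screen.width / scene.width, screen.height / scene.height)
    return CGSize(width: screen.width / scale, height: screen.height / scale)
}

// Hypothetical values: a 1024x1366 scene on a 390x844 point screen.
// The full scene height is visible; the width is cropped.
let visible = visibleSceneSize(scene: CGSize(width: 1024, height: 1366),
                               screen: CGSize(width: 390, height: 844))
```

Placing sprites against this visible size, rather than against UIScreen.main.bounds, is one way to get consistent placement on both iPhone and iPad.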
I have a UIScrollView displaying an image. I want to programmatically zoom in to a rect somewhere near the center (it doesn't have to be exact) of the currently visible area. How would I get the coordinates of this rect to use with zoomToRect? Note that the image could already be zoomed in, with only a fraction of the scroll view's content area showing.
The X and Y positions of that image are relative to the scroll view's contentSize. The area shown on screen is defined by the scroll view's contentOffset.
You then take the position of your scroll view on screen and the position of the selection rectangle on your screen.
Finally, you need to do rather simple maths (a few additions/subtractions) for both X and Y using the above values.
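Those few additions and subtractions can be sketched as follows; the function name and values are invented for illustration. The rect passed to zoomToRect(_:animated:) is expressed in the coordinate space of the view being zoomed, so the currently visible area is recovered by dividing contentOffset and the bounds size by zoomScale, then shrunk around its centre:

```swift
import Foundation

// Compute a rect near the centre of the visible area, in the zoomed
// view's coordinate space (what zoomToRect(_:animated:) expects).
func centeredZoomRect(contentOffset: CGPoint,
                      boundsSize: CGSize,
                      zoomScale: CGFloat,
                      targetZoomScale: CGFloat) -> CGRect {
    // Currently visible area, translated back to unscaled coordinates.
    let visible = CGRect(x: contentOffset.x / zoomScale,
                         y: contentOffset.y / zoomScale,
                         width: boundsSize.width / zoomScale,
                         height: boundsSize.height / zoomScale)
    // Size the target rect so zooming to it lands on targetZoomScale,
    // keeping the same centre point.
    let w = boundsSize.width / targetZoomScale
    let h = boundsSize.height / targetZoomScale
    return CGRect(x: visible.midX - w / 2,
                  y: visible.midY - h / 2,
                  width: w, height: h)
}
```

For example, with contentOffset (100, 200), bounds 300x400, a current zoomScale of 2 and a target of 4, this yields (87.5, 150.0, 75.0, 100.0), which can be passed straight to the scroll view's zoom-to-rect call.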
Grab the UIImageView's frame and call insetBy(dx:dy:):
Returns a rectangle that is smaller or larger than the source
rectangle, with the same center point.
From Apple Documentation
Here's a quick visualisation in a PlayGround:
let blueSquare = UIView(frame: CGRect(x: 0, y: 0, width: 300, height: 300))
let yellowSquare = UIView(frame: blueSquare.frame.insetBy(dx: 100, dy: 100))
blueSquare.backgroundColor = .blue
yellowSquare.backgroundColor = .yellow
blueSquare.addSubview(yellowSquare)
Will result in this:
How can I resize an image based on the height of the mobile device that's in landscape mode? I have a wide image (a ruler) and want to be able to slide the image back and forth. I've tried scaling the image, but I can't seem to get it to work.
func resizeImage(size: CGSize) {
    // Scale the image uniformly so its height matches the image view's height.
    let scaleFactor = imageView.bounds.height / size.height
    let newHeight = size.height * scaleFactor
    let newWidth = size.width * scaleFactor
    let newSize = CGSize(width: newWidth, height: newHeight)
    imageView.frame = CGRect(origin: imageView.frame.origin, size: newSize)
    scrollView.contentSize = imageView.frame.size
    scrollView.autoresizingMask = [.flexibleLeftMargin, .flexibleRightMargin, .flexibleTopMargin, .flexibleBottomMargin]
    scrollView.contentMode = UIViewContentMode.scaleAspectFit
}
If you are using a storyboard, it is easier to do with Auto Layout constraints. You have to use an Equal Heights constraint with a multiplier. Although I am also a beginner and haven't used this method for landscape mode, I assume it will work for landscape too.
Follow these steps:
Suppose the view you are using in your storyboard is an iPhone 7 view, so your screen height will be 667.
Now suppose that in this view your image looks perfect with a height of 200.
So take the ratio: 200/667 = 0.299.
Now set an Equal Heights constraint by selecting both views (i.e. the image view and the superview), using the ratio as the multiplier.
You can check other devices and orientations with the "View as:" option at the bottom of the storyboard:
In the end, height constraint of image should look like this. My height ratio is 0.4 (pic here)
I am facing one strange problem. Using the code below, I am making the UIScrollView full screen.
CGRect screenBound = [[UIScreen mainScreen] bounds];
CGSize screenSize = screenBound.size;
CGFloat screenWidth = screenSize.width;
CGFloat screenHeight = screenSize.height;
CGRect scrollFrame = CGRectMake(0, 0, screenWidth, screenHeight);
self.imageHostScrollView.frame = scrollFrame;
NSLog(@"Scroll Height: %f, Width: %f", screenHeight, screenWidth);
The problem I am facing is that when the iPad is in Portrait mode, the height should be the bigger value and the width the smaller one, but instead I am getting the smaller height and the bigger width (the same happens in Landscape mode).
In Portrait mode the values I am getting are:
Scroll Height: 768.000000, Width: 1024.000000
In Landscape mode the values I am getting are:
Scroll Height: 1024.000000, Width: 768.000000
Can anyone help me?
The problem is that the way you're setting self.imageHostScrollView.frame is silly: you are effectively hard-coding assumptions about the screen into the frame of a view, two things that are completely unrelated to one another.
Instead, use auto layout to pin the edges of the scroll view to the edges of the window. That way, whatever the window may do from now on — making no assumptions about what that may be — the scroll view will continue to fill it exactly.
I am trying to create functionality like PicFrame or any other application that combines photos into a collage in one frame.
I've created two scroll views, each containing an image view, for scrolling and zooming the images. It works well.
Then I need to create one square image from the two rectangular images.
let firstImage = UIImage(named: leftImagePath)
let secondImage = UIImage(named: rightImagePath)
let size = CGSize(width: 1080, height: 1080)
UIGraphicsBeginImageContext(size)
// Draw each source image into its half of the square context.
let leftImageAreaSize = CGRect(x: 0, y: 0, width: size.width / 2, height: size.height)
firstImage!.draw(in: leftImageAreaSize)
let rightImageAreaSize = CGRect(x: size.width / 2 + 1, y: 0, width: size.width / 2, height: size.height)
secondImage!.draw(in: rightImageAreaSize)
let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
This code works well, but I need to take the scroll and zoom values into account, cropping and scaling the images with those values before creating the single square image.
Can anyone guide me on how to do this?
I make PicFrame, so I suppose I have some experience here. Although this isn't what I do, and I haven't tried it myself, if you just want a quick image of what you see, you could use drawViewHierarchy(in:afterScreenUpdates:) and capture the screen area.
Otherwise, what you want to do is get the CGPoint contentOffset and the CGSize bounds.size from the UIScrollView, then divide them by the UIScrollView's zoomScale. Make sure your contentSize is the size of the image, so that a zoomScale of 1.0 gives the width and height of the original image.
From this you should be able to build a CGRect which is the x, y, width, and height of what is visible in your scroll view, translated to the size of the image. Crop the image to this rect and then draw it into your final graphics context at your desired CGRect.
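A hedged sketch of that translation (the function name is invented; it assumes contentSize equals the image size at a zoomScale of 1.0, as recommended above):

```swift
import Foundation

// Translate the scroll view's visible region into the coordinates of the
// original image: divide both the offset and the viewport size by zoomScale.
func visibleImageRect(contentOffset: CGPoint,
                      boundsSize: CGSize,
                      zoomScale: CGFloat) -> CGRect {
    return CGRect(x: contentOffset.x / zoomScale,
                  y: contentOffset.y / zoomScale,
                  width: boundsSize.width / zoomScale,
                  height: boundsSize.height / zoomScale)
}
```

The resulting rect (converted from points to pixels if the image has a scale factor) can be handed to CGImage's cropping(to:), and the cropped piece is then drawn into the left or right half of the final 1080x1080 context.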