How can I cut a rectangle out of a bitmap - iOS

I want to cut a rectangle at x = 100, y = 100 with width = 50 and height = 50 out of an image that I loaded from the web into a UIImageView. First of all, is it possible to manipulate an image loaded from a remote URL? If so, what class/method is useful for this?

Related

Removing statusbar from screenshot on iOS

I'm trying to remove the top part of an image by cropping, but the result is unexpected.
The code used:
extension UIImage {
    class func removeStatusbarFromScreenshot(_ screenshot: UIImage) -> UIImage {
        let statusBarHeight = 44.0
        let newHeight = screenshot.size.height - statusBarHeight
        let newSize = CGSize(width: screenshot.size.width, height: newHeight)
        let newOrigin = CGPoint(x: 0, y: statusBarHeight)
        let imageRef: CGImage = screenshot.cgImage!.cropping(to: CGRect(origin: newOrigin, size: newSize))!
        let cropped: UIImage = UIImage(cgImage: imageRef)
        return cropped
    }
}
My logic is that I need to make the image 44 px shorter and move the y origin down by 44 px, but it ends up creating a much smaller image of just the top-left corner.
The only way I can get it to work as expected is by multiplying the width by 2 and the height by 2.5 in newSize, but that also doubles the size of the produced image, which doesn't make much sense. Can someone help me make it work without magic values?
There are two main problems with what you're doing:
A UIImage has a scale (usually tied to resolution of your device's screen), but a CGImage does not.
Different devices have different "status bar" heights. In general, what you want to cut off from the top is not the status bar but the safe area. The top of the safe area is where your content starts.
Because of this:
You are wrong to talk about 44 px. There are no pixels here. Pixels are physical atomic illuminations on your screen. In code, there are points. Points are independent of the scale (and the scale is the multiplier between points and pixels).
You are wrong to talk about the number 44 itself as if it were hard-coded. You should get the top of the safe area instead.
By crossing into the CGImage world without taking scale into account, you lose the scale information, because CGImage knows nothing of scale.
By crossing back into the UIImage world without taking scale into account, you end up with a UIImage with a resolution of 1, which may not be the resolution of the original UIImage.
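To make the point-versus-pixel mismatch concrete, here is a small language-agnostic sketch in Python of the conversion the original code skips (the function name and numbers are mine; they assume a hypothetical 2x device):

```python
def points_rect_to_pixels(rect, scale):
    """Convert a crop rect expressed in points to the pixel
    coordinates that CGImage-level cropping actually operates on."""
    x, y, w, h = rect
    return (x * scale, y * scale, w * scale, h * scale)

# On a 2x device, a 44-point status bar is 88 pixels tall, so cropping
# with unscaled point values cuts off the wrong amount of image:
crop_in_points = (0, 44, 375, 768)
print(points_rect_to_pixels(crop_in_points, 2))  # (0, 88, 750, 1536)
```

This is also why the asker's "magic" multipliers of roughly 2 appeared to work: they were accidentally reintroducing the device scale by hand.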
The simplest solution is not to do any of what you are doing. First, get the height of the safe area; call it h. Then just draw the snapshot image into a graphics image context that is the same scale as your image (which, if you play your cards right, it will be automatically), but is h points shorter than the height of your image — and draw it with its y origin at -h, thus cutting off the safe area. Extract the resulting image and you're all set.
Example! This code comes from a view controller. First, I'll take a screenshot of my own device's current screen (this view controller's view) as my app runs:
let renderer = UIGraphicsImageRenderer(size: view.bounds.size)
let screenshot = renderer.image { context in
    view.layer.render(in: context.cgContext)
}
Now, I'll cut the safe area off the top of that screenshot:
let h = view.safeAreaInsets.top
let size = screenshot.size
let r = UIGraphicsImageRenderer(
    size: .init(width: size.width, height: size.height - h)
)
let result = r.image { _ in
    screenshot.draw(at: .init(x: 0, y: -h))
}
Experimentation will confirm that this works perfectly on every device, regardless of whether it has a bezel and regardless of its screen resolution: the top of the resulting image, result, is the top of your actual content.

Evenly cropping an image from both sides using ImageMagick .NET

I am using ImageMagick for .NET to crop and resize images, but the problem is that the library only crops from the bottom of the image. Isn't there a way to crop evenly from both top and bottom, or left and right?
Edited question :
MagickGeometry size = new MagickGeometry(width, height);
size.IgnoreAspectRatio = maintainAspectRatio;
imgStream.Crop(size);
Crop will always use the specified width and height in Magick.NET/ImageMagick, so there is no need to set size.IgnoreAspectRatio. If you want to cut out a specific area in the center of your image, you should use another overload of Crop that also takes a Gravity argument:
imgStream.Crop(width, height, Gravity.Center);
If the size variable is an instance of MagickGeometry, then there should be X & Y offset properties. I'm not familiar with .NET, but I would imagine it would be something like...
MagickGeometry size = new MagickGeometry(width, height);
size.IgnoreAspectRatio = maintainAspectRatio;
// Adjust geometry offset to center of image (same as `-gravity Center`)
size.Y = imgStream.Height / 2 - height / 2;
size.X = imgStream.Width / 2 - width / 2;
imgStream.Crop(size);
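The centered-offset arithmetic above is easy to sanity-check on its own; here is a minimal Python sketch (the function name is mine, not part of Magick.NET):

```python
def centered_crop_offset(src_w, src_h, crop_w, crop_h):
    """Offsets that center a crop_w x crop_h box inside a src_w x src_h
    image -- the same placement that -gravity Center produces."""
    return (src_w // 2 - crop_w // 2, src_h // 2 - crop_h // 2)

# Cropping 100x100 out of a 400x300 image leaves 150 px on each side
# horizontally and 100 px above and below:
print(centered_crop_offset(400, 300, 100, 100))  # (150, 100)
```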

Get x,y from Top, Left, Width and Height for Corona Objects

Good day
I just started using Corona and I'm kind of confused by the x and y properties. Is it possible to get the x and y values from Top, Left, Width, and Height if these are provided? For example, I want an object to be at Left=10, Top=0, Width=40, and Height=40. Can someone please advise how I can do this? This could be for images, text, text fields, etc.
Of course. There are several methods to do this.
Example 1:
local myImage = display.newImageRect( "homeBg.png", 40, 40)
myImage.anchorX = 0; myImage.anchorY = 0
myImage.x = 10 -- Left gap
myImage.y = 0 -- Top gap
localGroup:insert(myImage)
Here, setting the anchor point to (0,0) moves the image's reference point from its geometric center to its top-left corner.
Example 2:
local myImage = display.newImageRect( "homeBg.png", 40, 40)
myImage.x = (myImage.contentWidth/2) + 10
myImage.y = (myImage.contentHeight/2)
localGroup:insert(myImage)
Here, the center-X position of your image is calculated by adding the left gap to half the image's width, and the center-Y position by adding the top gap to half the image's height.
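The left/top-to-center conversion is just arithmetic; here it is as a quick Python sketch (the function name is mine; Corona itself is Lua, but the math is identical):

```python
def center_from_top_left(left, top, width, height):
    """Center coordinates for an object whose anchor stays at the
    default (0.5, 0.5), given the desired top-left corner."""
    return (left + width / 2, top + height / 2)

# Left=10, Top=0, Width=40, Height=40 from the question:
print(center_from_top_left(10, 0, 40, 40))  # (30.0, 20.0)
```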
You can position objects with any of these methods. If you are a beginner in Corona, the following topics will be useful for learning more about displaying objects with a specific size, position, etc.:
Corona SDK : display.newImageRect()
Tutorial: Anchor Points in Graphics 2.0
Corona uses something like a Cartesian coordinate system, but (0,0) is at the TOP LEFT. You can read more here:
https://docs.coronalabs.com/guide/graphics/group.html#coordinates
BUT: you can position an image relative to the screen edges, based on its width and height, using code like this (note: replace the file name, width, and height with your own image's):
-- Common Corona idiom for screen metrics:
local screenW, screenH = display.contentWidth, display.contentHeight
local screenOffsetW = display.contentWidth - display.viewableContentWidth

local image = display.newImageRect("images/yourImage.png", width, height)
-- TOP:
image.y = math.floor(display.screenOriginY + image.height * 0.5)
-- BOTTOM:
image.y = math.floor(screenH - display.screenOriginY) - image.height * 0.5
-- LEFT:
image.x = (screenOffsetW * 0.5) + image.width * 0.5
-- RIGHT:
image.x = math.floor(screenW - screenOffsetW * 0.5) - image.width * 0.5
Corona SDK display objects have attributes that can be read or set:
X = myObject.x -- gets the current center (by default) of myObject
width = myObject.width
You can set these values too....
myObject.x = 100 -- positions the object's center 100 points from the left edge of the content area
By default, Corona SDK display objects are positioned based on their center, unless you change the anchor point:
myObject.anchorX = 0
myObject.anchorY = 0
myObject.x = 100
myObject.y = 100
By setting the anchors to 0, .x and .y refer to the top-left corner of the object.
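For reference, the relationship between .x, the anchor, and the left edge is simple arithmetic; here is a small Python sketch of it (my own formulation, not a Corona API):

```python
def left_edge(x, width, anchor_x):
    """x is where the anchor sits on screen; the left edge is the
    anchor position minus the anchored fraction of the width."""
    return x - width * anchor_x

# Default anchor (0.5): x = 100 puts the left edge at 80 for width 40.
print(left_edge(100, 40, 0.5))  # 80.0
# Anchor 0: x = 100 IS the left edge.
print(left_edge(100, 40, 0))    # 100
```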

How to use scanCrop property on a custom sized ZBarReaderView with an overlay

I have a ZBarReaderView embedded in a viewController and I would like to limit the area of the scan to a square in the middle of the view. I have set the resolution of the camera to 1280x720. Assuming the device is in Portrait Mode, I calculate the normalized coordinates only using the cameraViewBounds, which is the readerView's bounds and the overlayViewFrame which is the orange box's frame, as seen in this screenshot - http://i.imgur.com/xzUDHIh.png . Here is what I have so far:
self.cameraView.session.sessionPreset = AVCaptureSessionPreset1280x720;

// Set cameraView's frame to fit the side controller's width
CGPoint origin = self.cameraView.frame.origin;
float cameraWidth = self.view.frame.size.width;
self.cameraView.frame = CGRectMake(origin.x, origin.y, cameraWidth, cameraWidth);

// Set cameraView's cropped scanning rect
CGFloat x, y, width, height;
CGRect cameraViewBounds = self.cameraView.bounds;
CGRect overlayViewFrame = self.overlay.frame;

y = overlayViewFrame.origin.y / cameraViewBounds.size.height;
x = overlayViewFrame.origin.x / cameraViewBounds.size.width;
height = overlayViewFrame.size.height / cameraViewBounds.size.height;
width = overlayViewFrame.size.width / cameraViewBounds.size.width;

self.cameraView.scanCrop = CGRectMake(x, y, width, height);
NSLog(@"\n\nScan Crop:\n%@", NSStringFromCGRect(self.cameraView.scanCrop));
As you can see in the screenshot, the blue box is the scanCrop rect, what I want to be able to do is have that blue box match the orange box. Do I need to factor in the resolution of the image or the image size when calculating the normalized values for the scanCrop?
I cannot figure out what spadix is explaining in this comment on sourceforge:
"So, assuming your sample rectangle represents screen points in portrait orientation and using the default 640x480 camera resolution, your scanCrop rectangle would be {75/w, 38/320., 128/w, 244/320.}, where w=480/(320*640.) is the major dimension of the image in screen points." and here:
"assuming your coordinates refer to points in portrait orientation, using the default camera image size, located at the default location and taking into account that the camera image is rotated:
scanCrop = CGRectMake(50/426., 1-(20+250)/320., 150/426., 250/320.)"
He uses the values 426 and 320 which I am assuming have to do with the image size but in one of his comments he mentions that the resolution is 640x480. How do I factor in the image size to calculate the correct rect for scanCrop?
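If I'm reading spadix's comments right, the arithmetic goes like this: the camera image is rotated 90 degrees relative to the portrait screen, so the image's major axis (640) runs down the screen, and on a 320-point-wide screen the displayed image is 640 × (320 / 480) ≈ 426.67 points tall, which is where 426 comes from. A Python sketch of that mapping (my reconstruction of his formula, not ZBar API code):

```python
def zbar_scan_crop(overlay, screen_w=320.0, cam_major=640.0, cam_minor=480.0):
    """Map an overlay rect (x, y, w, h) in portrait screen points to a
    normalized scanCrop rect, assuming the camera image is rotated 90
    degrees so its major axis runs along the screen's vertical axis."""
    ox, oy, ow, oh = overlay
    disp_h = cam_major * screen_w / cam_minor  # on-screen image height, ~426.67
    return (oy / disp_h,               # image x <- screen y
            1 - (ox + ow) / screen_w,  # image y <- screen x, flipped
            oh / disp_h,               # image w <- screen h
            ow / screen_w)             # image h <- screen w

# A screen rect of x=20, y=50, w=250, h=150 reproduces spadix's
# {50/426., 1-(20+250)/320., 150/426., 250/320.}:
print(zbar_scan_crop((20, 50, 250, 150)))
```

Under this reading, the fix for the code above is that scanCrop must be normalized against the rotated camera image's displayed size, not against cameraViewBounds directly.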

iOS Drawing image issue

I'm trying to create an application where I use a UIScrollView and can draw different JPEG images at the appropriate point for each image. It should look like a video, for example:
I have a first JPEG image (the background; for example, with coordinates x = 0, y = 0 and dimensions width = 1024, height = 768) that I send to the UIScrollView for display. Then I get another JPEG image with its own coordinates and dimensions (for example x = 10, y = 10, width = 100, height = 100), and I need to show this "small" image over the background.
How can I do that?
Or, to put the issue another way. Example:
I have the BMP data of the first image (the background). How can I send this BMP data to the UIScrollView for display?
P.S. One more question: which of the ways presented above will be faster for processing and presenting the data?
WBR
Maxim Tartachnik