Evenly cropping an image from both sides using ImageMagick for .NET - imagemagick

I am using ImageMagick for .NET (Magick.NET) to crop and resize images. The problem is that the library only crops from the bottom of the image. Isn't there any way to crop it evenly from both top and bottom, or left and right?
Edited question:
MagickGeometry size = new MagickGeometry(width, height);
size.IgnoreAspectRatio = maintainAspectRatio;
imgStream.Crop(size);

Crop will always use the specified width and height in Magick.NET/ImageMagick, so there is no need to set size.IgnoreAspectRatio. If you want to cut out a specific area in the center of your image, you should use another overload of Crop that also takes a Gravity as an argument:
imgStream.Crop(width, height, Gravity.Center);
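For context, here is a minimal end-to-end sketch of that gravity-based crop (the file names and concrete width/height values are placeholders, not from the thread):

using ImageMagick;

int width = 300, height = 300; // target crop size (placeholder values)
using (var image = new MagickImage("input.jpg"))
{
    // Cut a width x height window out of the middle of the image,
    // trimming evenly from both sides.
    image.Crop(width, height, Gravity.Center);
    // RePage drops the virtual canvas offset that Crop leaves behind,
    // so the output reports the cropped size rather than the original page.
    image.RePage();
    image.Write("output.jpg");
}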

If the size variable is an instance of MagickGeometry, then there should be X & Y offset properties. I'm not familiar with .NET, but I would imagine it would be something like...
MagickGeometry size = new MagickGeometry(width, height);
size.IgnoreAspectRatio = maintainAspectRatio;
// Adjust geometry offset to center of image (same as `-gravity Center`)
size.Y = imgStream.Height / 2 - height / 2;
size.X = imgStream.Width / 2 - width / 2;
imgStream.Crop(size);

Related

swift get image in aspect fill from original image [duplicate]

This question already has answers here:
How to crop a UIImageView to a new UIImage in 'aspect fill' mode?
(2 answers)
Closed 4 years ago.
The problem I am facing is that the image taken from the camera is larger than the one shown in the live view. I have the camera view set up as Aspect Fill.
The image that I get from the camera is about 4000x3000, and the view that shows the live feed is 375x800 (fullscreen iPhone X size). How do I transform/cut out the part of the camera image that matches what is shown in the live view, so I can further manipulate it (draw over it)?
As far as I understand, the Aspect Fill mode clips the part of the image that cannot be shown in the view. But that clipping does not happen at X = 0 and Y = 0; it happens somewhere in the middle of the image. So how do I get that X and Y on the original image so that I can crop out exactly that part?
I hope I explained it well enough.
EDIT:
To give more context, here are some code snippets to make the issue easier to understand.
Setting up my camera with the .resizeAspectFill gravity.
cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
cameraPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
cameraPreviewLayer?.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
cameraPreviewLayer?.frame = self.captureView.frame
self.captureView.layer.addSublayer(cameraPreviewLayer!)
which is displayed in the live view (captureView) that has the size of
375x818 (width: 375 and height: 818).
Then I get the image from that camera on button click and the size of that image is:
3024x4032 (width: 3024 and height: 4032)
So what I want to do is crop the camera image to match what is visible in the live view (captureView), which is set to Aspect Fill.
As you already state, the content mode option Aspect Fill tries to fill up the live view, and you are also right that it crops a rectangle from the center (cropping top-bottom or left-right depending on the image size and the image view size).
For a generic solution there are two possible cases:
The image needs to be cropped along its height to fit the image view (after scaling to fill the width, the drawing height overflows the view).
The image needs to be cropped along its width to fit the image view (after scaling to fill the height, the drawing width overflows the view).
Given your size notation of 4000x3000 (height = 4000, width = 3000, a portrait image) and a drawing canvas of size 375x800 (height = 375, width = 800), your cropping will be height-wise under the content mode Aspect Fill.
So cropping starts from X = 0, but Y will be some positive value. Let's calculate that Y:
let proportionalHeight = 4000.0 / 3000.0 * 800.0 // drawn height once the width fits 800 (use Double so the division doesn't truncate)
let allowedHeight = 375.0
let topBottomCroppedHeight = proportionalHeight - allowedHeight
let croppedYPosition = topBottomCroppedHeight / 2.0
So there you have your Y value, and the height is the height of the canvas / live view you are rendering into. Replace these literals with your own variables.
If you are interested in how all the contentMode options work, you can dive in here; all the contentModes supported by UIImageView are simulated there.
Happy coding.
UPDATE
One thing I forgot to mention: the calculated croppedYPosition lives in the scaled-down drawing space. If you want to use this value on the original 4000x3000 image, you have to scale it back up by the same factor that fitted the width (3000 pixels onto 800 points), as follows:
let originalYPosition = croppedYPosition * (3000.0 / 800.0)
Use originalYPosition to crop from the original image of size 4000x3000.
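Since the arithmetic above is language-neutral, here is the same idea as a small self-contained sketch, written in C# like the other threads on this page (the helper and all names are illustrative, not from the original answer):

using System;

class AspectFillMath
{
    // Returns the rectangle, in original-image pixels, that an "aspect fill"
    // view of viewW x viewH actually displays from an imgW x imgH image.
    static (double X, double Y, double W, double H) AspectFillCrop(
        double imgW, double imgH, double viewW, double viewH)
    {
        // Aspect Fill scales by the larger of the two ratios and clips the rest.
        double scale = Math.Max(viewW / imgW, viewH / imgH);
        double visibleW = viewW / scale; // visible width, in image pixels
        double visibleH = viewH / scale; // visible height, in image pixels
        return ((imgW - visibleW) / 2, (imgH - visibleH) / 2, visibleW, visibleH);
    }

    static void Main()
    {
        // The asker's actual numbers: a 3024x4032 photo in a 375x818 view.
        var crop = AspectFillCrop(3024, 4032, 375, 818);
        // Prints roughly X=588 Y=0 W=1848 H=4032; with these numbers the
        // crop turns out to be width-wise rather than height-wise.
        Console.WriteLine($"X={crop.X:F0} Y={crop.Y:F0} W={crop.W:F0} H={crop.H:F0}");
    }
}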

How do I change my image width to be twice the size of the screen's width?

I want my image's height to be the height of the display and its width to be twice the screen width. I tried it this way, but the image width doesn't change.
local length = display.actualContentWidth * 2
local image = display.newImage("icon.png",display.contentCenterX,display.contentCenterY)
image.width = length
image.height = display.actualContentHeight
According to the manual, the width and height properties give the original image size, so changing their values will most likely not resize the image.
You can use the scale function (object:scale(sx, sy), or the xScale/yScale properties) to resize the display object to the desired dimensions once you've calculated the scale factors.

How do I change the aspect ratio of the camera in Xamarin iOS?

First of all, my app is written mostly in Xamarin.Forms, but uses a custom renderer for the camera. Not sure if that will affect anything.
My problem currently is that I need to change the aspect ratio of the camera from 16:9 to something custom, if this is even possible.
I have tried changing the width and height independently but, for example, if I change the height to something significantly larger, all it does is expand the width and height at the same rate until it 'hits the side of the box' the camera is contained in. This results in the width being correct, but the height I specified in code being a lot higher than the actual height of the camera preview.
var videoPreviewLayer = new AVCaptureVideoPreviewLayer(captureSession)
{
    // 16 / 9 = 1.77777778
    Frame = new CGRect(-15, -45, size, size * 1.77777778) /*LiveCameraStream.Bounds*/,
    BackgroundColor = new CGColor(0, 255, 0) // green
};
viewLayer.AddSublayer(videoPreviewLayer);
viewLayer.Bounds = new CGRect(4, -50, 100, 50);
viewLayer.BackgroundColor = new CGColor(255, 0, 0); // red
// Background colours included just for dev purposes to distinguish between layers
Whenever I set the height to width * 1.777..., it results in a perfect 16:9 aspect ratio, which unfortunately is not what I need. In my screenshot the camera fills only part of the red area, whereas ideally it would take up all of it.
So my question is: how do I change the aspect ratio of this camera, if that is even possible?
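The thread ends here without an answer. For what it's worth, a common workaround (an assumption on my part, not something from the original thread) is that you cannot change the sensor's native aspect ratio, but you can make the preview fill an arbitrary container and let the container clip the excess:

var videoPreviewLayer = new AVCaptureVideoPreviewLayer(captureSession)
{
    // Fill the container at the sensor's native ratio; whatever
    // does not fit is clipped rather than letterboxed.
    VideoGravity = AVLayerVideoGravity.ResizeAspectFill,
    Frame = viewLayer.Bounds
};
viewLayer.AddSublayer(videoPreviewLayer);
viewLayer.MasksToBounds = true; // clip the preview to the red area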

How to use scanCrop property on a custom sized ZBarReaderView with an overlay

I have a ZBarReaderView embedded in a view controller and I would like to limit the scanning area to a square in the middle of the view. I have set the camera resolution to 1280x720. Assuming the device is in portrait mode, I calculate the normalized coordinates using only cameraViewBounds, which is the reader view's bounds, and overlayViewFrame, which is the orange box's frame, as seen in this screenshot: http://i.imgur.com/xzUDHIh.png . Here is what I have so far:
self.cameraView.session.sessionPreset = AVCaptureSessionPreset1280x720;
//Set CameraView's Frame to fit the SideController's Width
CGPoint origin = self.cameraView.frame.origin;
float cameraWidth = self.view.frame.size.width;
self.cameraView.frame = CGRectMake(origin.x, origin.y, cameraWidth, cameraWidth);
//Set CameraView's Cropped Scanning Rect
CGFloat x,y,width,height;
CGRect cameraViewBounds = self.cameraView.bounds;
CGRect overlayViewFrame = self.overlay.frame;
y = overlayViewFrame.origin.y / cameraViewBounds.size.height;
x = overlayViewFrame.origin.x / cameraViewBounds.size.width;
height = overlayViewFrame.size.height / cameraViewBounds.size.height;
width = overlayViewFrame.size.width / cameraViewBounds.size.width;
self.cameraView.scanCrop = CGRectMake(x, y, width, height);
NSLog(@"\n\nScan Crop:\n%@", NSStringFromCGRect(self.cameraView.scanCrop));
As you can see in the screenshot, the blue box is the scanCrop rect; what I want is for that blue box to match the orange box. Do I need to factor in the resolution or size of the image when calculating the normalized values for scanCrop?
I cannot figure out what spadix is explaining in this comment on sourceforge:
"So, assuming your sample rectangle represents screen points in portrait orientation and using the default 640x480 camera resolution, your scanCrop rectangle would be {75/w, 38/320., 128/w, 244/320.}, where w=480/(320*640.) is the major dimension of the image in screen points." and here:
"assuming your coordinates refer to points in portrait orientation, using the default camera image size, located at the default location and taking into account that the camera image is rotated:
scanCrop = CGRectMake(50/426., 1-(20+250)/320., 150/426., 250/320.)"
He uses the values 426 and 320, which I assume have to do with the image size, but in one of his comments he mentions that the resolution is 640x480. How do I factor the image size into calculating the correct rect for scanCrop?
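The question is left open here, but one consistent reading of spadix's two comments (my interpretation; verify against your own setup) is that scanCrop is expressed in normalized camera-image coordinates, and the camera image is landscape while the UI is portrait, so the axes swap and one of them flips. The arithmetic, sketched in C# for consistency with the other threads on this page (every name here is illustrative):

// screenW: view width in points (e.g. 320).
// camMajor x camMinor: camera resolution (e.g. 640x480 -> major 640, minor 480).
// overlayX/Y/W/H: the orange box's frame in view points.
static (double X, double Y, double W, double H) ScanCrop(
    double screenW, double camMajor, double camMinor,
    double overlayX, double overlayY, double overlayW, double overlayH)
{
    // The image's long side, expressed in screen points, once its short
    // side is stretched across the screen width.
    double majorInPoints = camMajor * screenW / camMinor;
    double x = overlayY / majorInPoints;              // image x <- screen y
    double y = 1.0 - (overlayX + overlayW) / screenW; // image y <- screen x, flipped
    double w = overlayH / majorInPoints;
    double h = overlayW / screenW;
    return (x, y, w, h);
}

Plugging in spadix's numbers (screen width 320, camera 640x480, overlay at (20, 50) with size 250x150) reproduces his CGRectMake(50/426., 1-(20+250)/320., 150/426., 250/320.) exactly, which suggests this is the intended mapping.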

Scaling sprites (not textures) for target viewport size/device in MonoGame

When you have to display a series of visual components (sprites) within a game, each taking a literal height and width that needs to be relative to the height and width of the target device's viewport (not necessarily its aspect ratio):
Is there a scaling class to help come up with scaling ratio in a dynamic fashion based on current device viewport size?
Will I need to roll my own scaling ratio algorithm?
Any cross platform issues I should be aware of?
This is not a question about loading assets based on the target device, nor about how to perform the scaling of the sprite (which is described here: http://msdn.microsoft.com/en-us/library/bb194913.aspx); rather, it is a question of how to determine the scale of sprites based on viewport size.
You can always create your own implementation of scaling.
For example, the default target viewport dimensions are:
const int defaultWidth = 1280, defaultHeight = 720;
And your current screen dimensions are 800×600, which gives you (let's use a Vector2 instead of two separate floats):
float currentWidth = GraphicsDevice.Viewport.Width,
      currentHeight = GraphicsDevice.Viewport.Height;
// Declared as float so the division below isn't integer division,
// which would truncate both components to zero.
Vector2 scale = new Vector2(currentWidth / defaultWidth,
                            currentHeight / defaultHeight);
This gives you a {0.625; 0.83333}. You can now use this in a handy SpriteBatch.Draw() overload that takes a Vector2 scaling variable:
public void Draw (
    Texture2D texture,
    Vector2 position,
    Nullable<Rectangle> sourceRectangle,
    Color color,
    float rotation,
    Vector2 origin,
    Vector2 scale,
    SpriteEffects effects,
    float layerDepth
)
Alternatively, you can draw all your stuff to a RenderTarget2D and copy the resulting image from there to a stretched texture on the main screen, but that will still require the above SpriteBatch.Draw() overload, though it might save you time if you have lots of draw calls.
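A minimal sketch of that render-target approach, reusing the defaultWidth/defaultHeight constants from above (the _scene name and the surrounding draw calls are mine, for illustration):

// Created once, e.g. in LoadContent: an off-screen buffer at the design resolution.
var _scene = new RenderTarget2D(GraphicsDevice, defaultWidth, defaultHeight);

// Each frame: draw the whole scene in 1280x720 design coordinates...
GraphicsDevice.SetRenderTarget(_scene);
GraphicsDevice.Clear(Color.CornflowerBlue);
spriteBatch.Begin();
// ... all regular draw calls go here, unscaled ...
spriteBatch.End();

// ...then stretch the finished image onto the real backbuffer in one call.
GraphicsDevice.SetRenderTarget(null);
spriteBatch.Begin();
spriteBatch.Draw(_scene, GraphicsDevice.Viewport.Bounds, Color.White);
spriteBatch.End();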
Another option to generate the scale is to leverage Matrix.CreateScale:
var scaleMatrix = Matrix.CreateScale(
    (float)GraphicsDevice.Viewport.Width / View.Width,
    (float)GraphicsDevice.Viewport.Height / View.Height, 1f);
http://msdn.microsoft.com/en-gb/library/bb195692.aspx.
But this did not meet my needs, as I would then have to roll my own transform to map touch input location to the 'transformed' sprites (which respond to user touch input by knowing their own position and size).
In the end I used a percentage based approach.
I basically got the viewport height and width...
GraphicsDevice.Viewport.Width
GraphicsDevice.Viewport.Height
...then calculated the height and width of my sprites (note: as mentioned in the question, they take a literal height and width) based on their size relative to the screen, using percentages.
// I want the button's width and height to be 20% of the viewport
var x = GraphicsDevice.Viewport.Width * 0.2f;  // 20% of screen width
var y = GraphicsDevice.Viewport.Height * 0.2f; // 20% of screen height
var btnsize = new Vector2(x, y);
var button = new GameButton(btnsize);
Then, once I have the size of the button, I can calculate its position on the screen from the button size and the available viewport size, again working with relative positions expressed as percentages.
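As a hedged illustration of that last step (buttonTexture is a placeholder name, and btnsize comes from the snippet above), drawing the button at fixed viewport percentages might look like this:

// Place the button's centre at 50% of the viewport width and 85% of its
// height, so the layout holds at any device resolution.
var viewport = GraphicsDevice.Viewport;
var position = new Vector2(
    viewport.Width * 0.5f - btnsize.X / 2f,    // horizontally centred
    viewport.Height * 0.85f - btnsize.Y / 2f); // near the bottom edge
// Scale the texture so it renders at exactly btnsize pixels.
var drawScale = new Vector2(btnsize.X / buttonTexture.Width,
                            btnsize.Y / buttonTexture.Height);
spriteBatch.Draw(buttonTexture, position, null, Color.White, 0f,
    Vector2.Zero, drawScale, SpriteEffects.None, 0f);

This uses the same SpriteBatch.Draw overload quoted earlier, with the Vector2 scale derived from the percentage-based size.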
