How to resize an image without distortion in BlackBerry - blackberry

I am downloading an image from the web and applying the Fixed32 method to resize the image to fit the screen. Resizing the image causes some loss of clarity, but my client wants the image displayed without this distortion. How can I do this?

The way I do this is to not resize the image on the phone. I ask my server to send me the image at the width and height I need, and the server does the resizing. You could try a similar solution: it gives you a properly resized image and improves your app's performance.

Use the following method to resize the image:
public static Bitmap SizePic(EncodedImage Resizor, int Height, int Width) {
    // Current dimensions of the downloaded image.
    int currHeight = Resizor.getHeight();
    int currWidth = Resizor.getWidth();

    // Fixed32 scale factors: current dimension divided by target dimension.
    int multH = Fixed32.div(Fixed32.toFP(currHeight), Fixed32.toFP(Height));
    int multW = Fixed32.div(Fixed32.toFP(currWidth), Fixed32.toFP(Width));

    // scaleImage32 takes the horizontal and vertical scale factors as Fixed32 values.
    Resizor = Resizor.scaleImage32(multW, multH);
    return Resizor.getBitmap();
}

Related

Shrink image on iPhone 7 in Xamarin.iOS?

I am working in Xamarin.iOS. I capture a picture with the camera and try to upload it to the server, but the upload fails with a "TargetInvocationException". When I run the same code on an iPad, everything works fine.
Here is the code:
Camera.TakePicture(this, (obj) =>
{
    var photo = obj.ValueForKey(new NSString("UIImagePickerControllerOriginalImage")) as UIImage;
    Byte[] myByteArray;
    using (NSData imageData = photo.AsJPEG(0.0f))
    {
        //myByteArray = imageData.ToArray();
        myByteArray = new Byte[imageData.Length];
        System.Runtime.InteropServices.Marshal.Copy(imageData.Bytes, myByteArray, 0, Convert.ToInt32(imageData.Length));
    }
    ImageLoaderPopup imageLoader = new ImageLoaderPopup(this, selectedWorkOrder, myByteArray, title);
    imageLoader.PopUp(true, delegate { });
});
Does anyone know why I am facing this problem? What am I doing wrong?
You are probably having a problem with the size of the image returned by the picker. Images taken with the iPhone 7 are huge.
You just need to scale down the original image before uploading it, using the Scale method and setting a size you find acceptable.
var smallImage = image.Scale(new CGSize(1280, 720)); //Or the size you need.
UPDATE
All you need is the Scale method mentioned above.
The big advantage of Xamarin being Open Source is that you can look at the internals any time you have a doubt.
https://github.com/xamarin/xamarin-macios/blob/master/src/UIKit/UIImage.cs#L66

How can I use OpenGL ES to crop an image?

So I face some problems when dealing with image cropping. I am aware of two possible ways: UIGraphicsBeginImageContextWithOptions combined with drawAtPoint:blendMode:, and CGImageCreateWithImageInRect. They both work but have some serious disadvantages: the first way takes a lot of time (in my case about 7 seconds) and memory (I receive a memory warning) when cropping an image taken with the iPhone camera; the second ends up with a rotated image, so you need a bunch of extra code to defeat this behavior, which I don't want. What I want to know is how, for instance, Apple's built-in edit function in the Photos app works, or Aviary, or any other photo editor. Consider Apple's editor (iOS 8): you can rotate the image and change the cropping rectangle, they even blur the area outside the cropping rect, and yet when you apply the crop it takes at most 8 MB of memory and happens immediately. How do they do this?
The only thought I have is that they use the GPU (Aviary, for instance). So, to combine all these questions into one: how can I use OpenGL to make cropping a less painful operation? I've never worked with it, so any tutorials, links and sources are welcome. Thank you in advance.
As already mentioned, this should most likely not be done with OpenGL, but even so...
The problem most people have is getting the rectangle in which the image should be displayed, and the solution looks something like this:
- (CGRect)fillSizeForSource:(CGRect)source target:(CGRect)target
{
    if (source.size.width/source.size.height > target.size.width/target.size.height)
    {
        // keep target height and make the width larger
        CGSize newSize = CGSizeMake(target.size.height * (source.size.width/source.size.height), target.size.height);
        return CGRectMake((target.size.width-newSize.width)*.5f, .0f, newSize.width, newSize.height);
    }
    else
    {
        // keep target width and make the height larger
        CGSize newSize = CGSizeMake(target.size.width, target.size.width * (source.size.height/source.size.width));
        return CGRectMake(.0f, (target.size.height-newSize.height)*.5f, newSize.width, newSize.height);
    }
}

- (CGRect)fitSizeForSource:(CGRect)source target:(CGRect)target
{
    if (source.size.width/source.size.height < target.size.width/target.size.height)
    {
        // keep target height and make the width smaller
        CGSize newSize = CGSizeMake(target.size.height * (source.size.width/source.size.height), target.size.height);
        return CGRectMake((target.size.width-newSize.width)*.5f, .0f, newSize.width, newSize.height);
    }
    else
    {
        // keep target width and make the height smaller
        CGSize newSize = CGSizeMake(target.size.width, target.size.width * (source.size.height/source.size.width));
        return CGRectMake(.0f, (target.size.height-newSize.height)*.5f, newSize.width, newSize.height);
    }
}
I did not test this.
Or, since you are on iOS, simply create an image view with the desired size, add the image to it, and then take a snapshot of that view.
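A minimal sketch of that snapshot approach, in case it helps; the method name, target-size parameter and aspect-fill content mode are assumptions for illustration, not part of the original answer:
// Resize/crop an image by snapshotting a UIImageView (illustrative sketch).
- (UIImage *)snapshotOfImage:(UIImage *)image targetSize:(CGSize)targetSize
{
    UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, targetSize.width, targetSize.height)];
    imageView.contentMode = UIViewContentModeScaleAspectFill; // crop whatever overflows the frame
    imageView.clipsToBounds = YES;
    imageView.image = image;

    UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);
    [imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshot;
}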

Set dimensions for UIImagePickerController "move and scale" cropbox

How does the "move and scale screen" determine dimensions for its cropbox?
Basically I would like to set a fixed width and height for the "CropRect" and let the user move and scale his image to fit in to that box as desired.
Does anyone know how to do this? (Or if it is even possible with the UIImagePickerController)
Thanks!
Not possible with UIImagePickerController unfortunately. The solution I recommend is to disable editing for the image picker and handle it yourself. For instance, I put the image in a scrollable, zoomable image view. On top of the image view is a fixed position "crop guide view" that draws the crop indicator the user sees. Assuming the guide view has properties for the visible rect (the part to keep) and edge widths (the part to discard) you can get the cropping rectangle like so. You can use the UIImage+Resize category to do the actual cropping.
CGRect cropGuide = self.cropGuideView.visibleRect;
UIEdgeInsets edges = self.cropGuideView.edgeWidths;
CGPoint cropGuideOffset = self.cropScrollView.contentOffset;
CGPoint origin = CGPointMake( cropGuideOffset.x + edges.left, cropGuideOffset.y + edges.top );
CGSize size = cropGuide.size;
CGRect crop = { origin, size };
crop.origin.x = crop.origin.x / self.cropScrollView.zoomScale;
crop.origin.y = crop.origin.y / self.cropScrollView.zoomScale;
crop.size.width = crop.size.width / self.cropScrollView.zoomScale;
crop.size.height = crop.size.height / self.cropScrollView.zoomScale;
photo = [photo croppedImage:crop];
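For reference, the croppedImage: call above comes from the UIImage+Resize category; a simplified sketch of what such a helper roughly does (ignoring the image-orientation handling the real category performs) would be:
// Simplified stand-in for a croppedImage: category method (sketch only, not the actual UIImage+Resize code).
- (UIImage *)croppedImage:(CGRect)bounds
{
    CGImageRef croppedRef = CGImageCreateWithImageInRect(self.CGImage, bounds);
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                           scale:self.scale
                                     orientation:self.imageOrientation];
    CGImageRelease(croppedRef);
    return cropped;
}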
Kinda late to the game but I think this may be what you are looking for: https://github.com/gekitz/GKImagePicker
Here is a solution for manual cropping by Ming Yang.
https://github.com/myang-git/iOS-Image-Crop-View
It offers a rectangular frame which the user can slide or drag to fit the required portion of the image inside the rectangle. Please note that this solution does the reverse of what was asked - it lets the rectangle size vary - but it eventually brings the desired result.
It is coded in Objective-C. You may have to either rewrite it in Swift or simply add a bridging header to connect the Objective-C code with your Swift code.
It's now later than late, but this may be useful for someone. This is the library I've used with Swift (many thanks to Tim Oliver):
TOCropViewController
As described in the README at the GitHub link above, this library lets you crop images to a user-defined rectangle and also to a circle, e.g. for updating a profile image.
Below is the sample code from GitHub:
func presentCropViewController() {
    let image: UIImage = ... //Load an image
    let cropViewController = CropViewController(image: image)
    cropViewController.delegate = self
    present(cropViewController, animated: true, completion: nil)
}

func cropViewController(_ cropViewController: CropViewController, didCropToImage image: UIImage, withRect cropRect: CGRect, angle: Int) {
    // 'image' is the newly cropped version of the original image
}

iOS resize and crop non-square images - high quality

I'm facing the following problem: I have several UIImages (not square) and I need to resize and crop them. I have read almost every question on Stack Overflow, but the results I get are not good; I mean the produced image has poor quality (it's blurry).
This is the scenario:
1) Original image size: width 208 pixels, height variable (from 50 to 2500)
2) Result image: width 100 pixels, height max 200 pixels
This is what I've done so far to achieve this result:
..... // missing code
CGFloat height = (100*image.size.height)/image.size.width;
self.thumbnail=[image resizedImage:CGSizeMake(100,height)
interpolationQuality:kCGInterpolationHigh];
..... // missing code
The method that I use to resize the image can be found here; once the image is resized, I crop it using the following code:
CGRect croppedRect = CGRectMake(0, 0, self.thumbnail.size.width, 200);
CGImageRef tmp = CGImageCreateWithImageInRect([self.thumbnail CGImage], croppedRect);
self.thumbnail = [UIImage imageWithCGImage:tmp];
CGImageRelease(tmp);
Long story short, the image is resized and cropped, but the quality is really poor considering that the original image had really good quality.
So the question is: how do I achieve this while keeping the image quality high?
If you target iOS 4 and later, you should use ImageIO to resize images.
http://www.cocoabyss.com/coding-practice/uiimage-scaling-using-imageio/
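A minimal sketch of an ImageIO-based downscale, assuming the image is available as NSData; the function name and parameters here are made up for illustration, and this is not the code from the linked post:
#import <UIKit/UIKit.h>
#import <ImageIO/ImageIO.h>

// Downscale image data with ImageIO; maxPixelSize bounds the longer side.
static UIImage *ResizedImageWithImageIO(NSData *imageData, CGFloat maxPixelSize)
{
    CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
    if (source == NULL) return nil;

    NSDictionary *options = @{
        (id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
        (id)kCGImageSourceCreateThumbnailWithTransform : @YES,   // respect EXIF orientation
        (id)kCGImageSourceThumbnailMaxPixelSize : @(maxPixelSize)
    };
    CGImageRef scaled = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
    CFRelease(source);
    if (scaled == NULL) return nil;

    UIImage *result = [UIImage imageWithCGImage:scaled];
    CGImageRelease(scaled);
    return result;
}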

Mouse handler in OpenCV for large images, wrong x,y coordinates?

I am using images that are 2048 x 500, and when I use cvShowImage, I only see half the image. This is not a big deal because the interesting part is the top half of the image. However, when I use the mouse handler to get the x,y coordinates of my clicks, I noticed that the coordinate for y (the dimension that doesn't fit on the screen) is wrong.
It seems OpenCV thinks this is the whole image and recalculates the coordinate system, although we are only effectively showing half the image.
I would need to know how to do 2 things:
- display a resized image that would fit on the screen
- get the proper coordinates
Did anybody encounter similar problems?
Thanks!
Update: it seems the y coordinate is half of what it is supposed to be
code:
EXPORT void click_rect(uchar * the_img, int size_x, int size_y, int * points)
{
    CvSize size;
    size.height = size_y;
    size.width = size_x;

    IplImage * img;
    img = cvCreateImageHeader(size, IPL_DEPTH_8U, 1);
    img->imageData = (char *)the_img;
    img->imageDataOrigin = img->imageData;

    img1 = cvCreateImage(cvSize((int)(size.width), (int)(size.height)), IPL_DEPTH_8U, 1);

    cvNamedWindow("mainWin", CV_WINDOW_AUTOSIZE);
    cvMoveWindow("mainWin", 100, 100);
    cvSetMouseCallback("mainWin", mouseHandler_rect, NULL);
    cvShowImage("mainWin", img1);

    // wait for a key
    cvWaitKey(0);

    points[0] = x_1;
    points[1] = x_2;
    points[2] = y_1;
    points[3] = y_2;

    // release the image
    cvDestroyWindow("mainWin");
    cvReleaseImage(&img1);
    cvReleaseImage(&img);
}
You should create a window with the CV_WINDOW_KEEPRATIO flag instead of the CV_WINDOW_AUTOSIZE flag. This temporarily fixes the problem with your y values being wrong.
I use OpenCV 2.1 and the Visual Studio C++ compiler. I fixed this problem with another flag, CV_WINDOW_NORMAL; it works properly and returns the correct coordinates, and this flag lets you resize the image window.
cvNamedWindow("Box Example", CV_WINDOW_NORMAL);
I am having the same problem with OpenCV 2.1, using it on Windows with the MinGW compiler. It took me forever to find out what was wrong. As you describe it, cvSetMouseCallback gets too-large y coordinates. This is apparently due to the image and the cvNamedWindow it is shown in being bigger than my screen resolution; thus I cannot see the bottom of the image.
As a solution, I resize the images to a fixed size so that they fit on the screen (in this case a resolution of 800x600, but it could be any other value):
// g_input_image, g_output_image and g_resized_image are global IplImage* pointers.
int img_w = cvGetSize(g_input_image).width;
int img_h = cvGetSize(g_input_image).height;

// If the height/width ratio is greater than 6/8, resize the height to 600.
if (img_h > (img_w*6)/8) {
    g_resized_image = cvCreateImage(cvSize((img_w*600)/img_h, 600), 8, 3);
}
// else adjust the width to 800.
else {
    g_resized_image = cvCreateImage(cvSize(800, (img_h*800)/img_w), 8, 3);
}
cvResize(g_output_image, g_resized_image);
Not a perfect solution, but works for me...
Cheers,
Linus
How are you building the window? You are not passing CV_WINDOW_AUTOSIZE to cvNamedWindow(), are you?
Share some source, @Denis.
