Resizable Window Resolution - jquery-ui

I have an img#myimage whose src is 1000px x 1000px. Then I resize it with .css() to 100 x 100 and apply resizable() to it:
$('#myimage').css({'width': '100px', 'height': '100px'})
    .resizable();
What happens now when I expand it with the resize handles back to 1000 x 1000? Do I still have a million pixels of resolution, or did I lose something when I reduced the width and height with css()?
It seems that I still have the million pixels, but I'd like to get someone else's thoughts on what's really happening here.

jQuery does not actually resize the image; it only scales it with CSS, so the source file is untouched. When you drag it back up to 1000 x 1000 it will still be at full resolution, because the underlying image data never changed.
If you wanted to actually resize and save the image, you would need server-side code such as PHP or ASP to process the image and send it back to the browser.
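A quick way to convince yourself of this (a minimal sketch, assuming the same img#myimage element from the question, run after the css()/resizable() call above):
// The element is displayed at 100 x 100, but naturalWidth/naturalHeight
// still report the intrinsic size of the source file.
console.log($('#myimage').width());                            // 100
console.log(document.getElementById('myimage').naturalWidth);  // 1000
// Dragging the resize handles back to 1000 x 1000 only changes the CSS
// width/height, so all of the original pixels are still there to display.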

Related

PDF vector images in iOS. Why does having a smaller image result in jagged edges?

I want to use PDF vector images in my app, but I don't totally understand how it works. I understand that a PDF file can be resized to any size and it will retain quality. I have a very large PDF image (a cartoon/sticker for a chat app) and it looks perfectly smooth at a medium size on screen. If I start to go smaller though, say thumbnail size, the black outline starts to look jagged. Why does this happen? I thought the images could be resized without quality loss. Any help would be appreciated.
Thanks
I had a similar issue when programmatically changing the UIImageView's centre.
This can lead to pixel misalignment of your view, i.e. the x or y of the frame's origin (or the width or height of the frame's size) may lie on a non-integral value, such as x = 10.5, whereas it will display correctly if x = 10.
Rendering views positioned a fraction of the way into a pixel results in jagged lines; I think it's related to aliasing.
Therefore, wrap the frame's CGRect with CGRectIntegral() to convert the origin and size values to integers.
Example (Swift):
imageView?.frame = CGRectIntegral(CGRectMake(10, 10, 100, 100))
See the Apple documentation https://developer.apple.com/library/mac/documentation/GraphicsImaging/Reference/CGGeometry/#//apple_ref/c/func/CGRectIntegral

WebMatrix - WebImage crop then resize

I want to make sure that all the images that users upload to my website are the same size.
The size that I want to achieve is 620px x 405px
Because I don't want any white space in my images, and I want to keep the aspect ratio, I'm guessing I'll need to crop first, before I resize?
So far I've got the following code:
photo.Resize(width: 620, height: 405, preserveAspectRatio: false, preventEnlarge: true);
But obviously this isn't giving me the desired effect.
I have seen other articles online where they do some formula, but I can't get any to work for me.
Suppose someone uploads an image which is 1020 wide by 405. Which bit do you want to keep? The left-hand end? The right-hand end? The middle? Then the next image is 3000 x 3000. Now which bit do you want to crop? Perhaps this one needs resizing before cropping, otherwise you might only get a small window of the full picture.
My recommendation is to allow the user to specify the crop area, and then you resize the resulting cropped image. There are a number of jQuery plugins that enable client side cropping. I've written about jCrop (http://www.mikesdotnetting.com/Article/161/WebMatrix-Testing-the-WebImage-Helper-With-JCrop) but I have also received some feedback that it is not reliable in some versions of IE (although I haven't tested that myself).
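A minimal client-side sketch of that approach, assuming an img#cropTarget element, hidden form fields #x, #y, #w and #h (all placeholder names), and the Jcrop plugin from the article above:
// Let the user select a crop region locked to the 620:405 aspect ratio.
$('#cropTarget').Jcrop({
    aspectRatio: 620 / 405,
    onSelect: function (c) {
        // c.x, c.y, c.w and c.h describe the selection in image pixels.
        // Store them in hidden fields and post them back, then crop to
        // those coordinates server-side and resize the result to 620 x 405.
        $('#x').val(c.x);
        $('#y').val(c.y);
        $('#w').val(c.w);
        $('#h').val(c.h);
    }
});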

When iOS shrinks an image, does it clip/pixelate it?

I have 2 relatively small PNGs that will be used as images inside UIButtons.
Once our app is finished, we might want to resize the buttons and make them smaller.
Now, we can easily do this by resizing the button frame; the system automatically scales the images down with it.
Would the system's autoresize cause the image to look ugly after shrinking the image? (i.e., would it clip pixels and make it look less smooth than if I were to shrink it in a photo editor myself?)
Or would it be better to make the images the sizes they are intended to be?
It is always best to make the images the correct size from the beginning. Any resize operation has a negative impact on the end result. Scaling up to a larger size makes a big difference, but even scaling down to a smaller size usually introduces visible noise in the image. Say you have a one-pixel-wide line in your image: scale it down to 90% of the original size and that line now only covers 90% of a pixel's width, so neighbouring parts of the image bleed into the colours of those same pixels.
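That blending is a property of any bitmap scaler, not something specific to iOS. As a rough illustration (a sketch using a browser canvas purely to show the resampling, not iOS code):
// Draw a 10 x 10 image containing a single 1-pixel black vertical line,
// then draw it again scaled down to 9 x 9 (90%).
var src = document.createElement('canvas');
src.width = 10;
src.height = 10;
var sctx = src.getContext('2d');
sctx.fillStyle = '#fff';
sctx.fillRect(0, 0, 10, 10);
sctx.fillStyle = '#000';
sctx.fillRect(5, 0, 1, 10);        // the 1-pixel-wide line

var dst = document.createElement('canvas');
dst.width = 9;
dst.height = 9;
var dctx = dst.getContext('2d');
dctx.drawImage(src, 0, 0, 9, 9);   // downscale to 90%

// The line no longer lands on a whole pixel, so instead of one crisp
// black column you get two partially grey columns.
console.log(dctx.getImageData(4, 0, 2, 1).data);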

TImageViewer maximum size

Does anyone know the maximum width and height of a Bitmap in a FireMonkey TImageViewer?
I am drawing vector graphics in a TImageViewer. I am only able to zoom up to a certain value, then I get a memory exception.
I've tested this on two separate computers, and it would appear that the actual size limit for a Bitmap in FireMonkey is 8000 x 8000 px, meaning both Width and Height cap at 8000 px.
This is what I observed, if anyone gets a different result please let me know.
How you get around that is up to you; I would suggest splitting the source image into multiple parts so that no part exceeds the limit, assigning each part to a different Bitmap component (such as a TImageViewer), and then making it all come together as a whole.

Can iOS Retina images have odd dimensions i.e. 28 x 15 px?

We are currently working with a designer who is supplying Retina images to us with odd dimensions, i.e. 28 x 15 px, which I believe is incorrect, as when you halve it you get a fractional size like 14 x 7.5 px.
This is a rule I have always worked on but the designer is not getting the point and I thought I should double check what the exact rules are.
I've had a look on the web but cannot seem to find any references on this, so it would be great to hear what everyone thinks on this matter.
Thanks
Yes you can, but it is NOT recommended.
For example, if you have a @2x image of 28 x 15 px, your normal (1x) image will be 14 x 8 px, with the 7.5 rounded up.
If you look closely at the normal image, the pixels are not aligned well.
It is always recommended to use an even number of pixels in each dimension.
The @2x image needs to be exactly twice the width and twice the height of the standard image, or the automatic loading of it won't happen - your app will load and pixel-double the non-Retina image.
The standard image file will as a matter of course be a whole number of pixels wide and high, so you'll need the @2x to be even in its dimensions.
Tell your designer to catch on ;)
It's not possible, because in Xcode you design your application with standard-resolution pictures, and you can't use a float for width or height. So you will end up with a one-pixel difference between your standard and Retina designs. Maybe the easiest way to solve your problem is to add a transparent line of pixels to your high-resolution picture.
