How does Photoshop's Liquify / Warp work?
This video shows what I mean:
https://youtu.be/zkfZzbRAuJ0
I want to create a Liquify / Warp effect like Photoshop's in my graphics application, but I have no idea where to start.
I want the mirror-like effect you get when pixels are pushed over each other many times: the pixels overlap and disappear, and then reappear when they are uncovered again.
How does Photoshop keep track of the pixels that have been pushed on top of each other, and rebuild them when they are uncovered?
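Photoshop can rebuild "lost" pixels because a Liquify-style warp is, as far as I understand it, non-destructive: the original pixels are never overwritten, and the tool only edits a mesh / displacement field. Each time the preview is rendered, every output pixel is looked up backwards in the original image. Below is a minimal, hypothetical Swift sketch of that idea (the Image struct and names are mine, not Photoshop's); because the source buffer is untouched, relaxing the displacement field "uncovers" the overlapped pixels again.

```swift
// Minimal sketch of a non-destructive, liquify-style warp.
// Assumptions: a grayscale image stored as [UInt8], and a displacement
// field that says, for every destination pixel, where to READ from in
// the ORIGINAL image (backward mapping).

struct Image {
    var width: Int
    var height: Int
    var pixels: [UInt8]            // grayscale, row-major

    func pixel(x: Int, y: Int) -> UInt8 {
        let cx = min(max(x, 0), width - 1)   // clamp samples to the edges
        let cy = min(max(y, 0), height - 1)
        return pixels[cy * width + cx]
    }
}

/// Backward-mapping warp: for each output pixel, look up an offset
/// (dx, dy) and sample the ORIGINAL image at (x + dx, y + dy).
func warp(source: Image, dx: [Float], dy: [Float]) -> Image {
    var out = source
    for y in 0..<source.height {
        for x in 0..<source.width {
            let i = y * source.width + x
            let sx = Int((Float(x) + dx[i]).rounded())
            let sy = Int((Float(y) + dy[i]).rounded())
            out.pixels[i] = source.pixel(x: sx, y: sy)
        }
    }
    return out
}
```

In this model a liquify brush just smoothly adds to dx/dy around the cursor, and a "reconstruct" tool pushes the offsets back toward zero, which restores the original pixels precisely because the source image was never modified.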
Related
I have a photoshop file with 8 concentric 'rings' (although some aren't rings and are more irregular), with the largest at the 'back' and decreasing in size up to the 8th one being very small in the centre.
The 'rings' are drawn in such a way as that the smaller ones are 'internal' to its 'outer' or next larger ring. Each 'ring' has transparency on its outside, but also on its inside (where the smaller rings would 'sit').
I need to support all iOS devices (Universal App).
The largest image has a default size of 2048x2048 pixels, but every one of the 8 layers has a common 'centre' point around which they need to rotate, and around which they need to be fixed.
So, basically, all 8 have to be layered, one on top of the other, such that their centres are all perfectly aligned.
BUT the size of the artwork is larger than any iOS device, and the auto-layout has to allow for every device size and orientation, with the largest (rear) layer having an 8 point inset from the screen edges.
For those that can't picture this, here is a crude representation, where the dark background is 'transparent' and represents the smaller of the width or height of the iOS device (depending on orientation):
Note: The placement of each smaller UIImageView is precise. They all share a common centre (the centre of the screen), but each ring sits 'inside' the larger ring behind it. i.e. the centres of the green, hot pink and baby pink circles are empty / transparent, and no matter what screen size or orientation, they have to nest together perfectly, as they do in the Photoshop art assets.
I've spent hours in Auto Layout trying to sort this out, and once I've got it working on one device in both orientations, it doesn't work on any others.
No code to show because I'm trying to do this in IB so I can preview on all devices.
Is this even possible using IB / Auto Layout, or do I have to switch to manually working out the scales by which to resize each UIImageView at runtime, based on the screen width / height and the relationship between each 'ring'?
Edit:
And unless I'm doing it wrong, embedding each UIImageView in a transparent UIView and using the UIView to fake 'insets' doesn't work, because those numbers are hard-coded: when it's perfect on a 12.9" iPad Pro, on an iPhone SE each 'inset' UIImageView is much more compressed and doesn't sit 'inside' its next larger ring, but looks like a tiny letter O with lots of surrounding blank space, because those 'insets' don't scale. 100pts is a tiny amount of space on an iPad, but a third of the screen on an iPhone SE.
You can draw circles using CAShapeLayer and UIBezierPath. Since you are trying to fit this in a square, I'd define the container size to be either the width or the height of the parent container, whichever is smaller; this allows for rotation and different screen sizes. As for the centre, you can always find it from the centre coordinates of your square container (container.bounds.size.width / 2). To rotate your layers/sublayers you can use this answer: https://stackoverflow.com/a/3929703/4577610
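Here's a minimal sketch of that suggestion, assuming the rings can be redrawn as stroked circles rather than image assets; the RingsView name, the ring fractions, and the colour are placeholders:

```swift
import UIKit

final class RingsView: UIView {

    // Fractions of the container's half-width for each ring (illustrative values).
    private let ringScales: [CGFloat] = [1.0, 0.85, 0.7, 0.55, 0.4, 0.3, 0.2, 0.1]
    private var ringLayers: [CAShapeLayer] = []

    override func layoutSubviews() {
        super.layoutSubviews()

        // The square drawing area is the smaller of the view's dimensions,
        // minus an 8pt inset on each side, so it fits any size/orientation.
        let side = min(bounds.width, bounds.height) - 16
        let center = CGPoint(x: bounds.midX, y: bounds.midY)

        // Create the layers once.
        if ringLayers.isEmpty {
            ringScales.forEach { _ in
                let shape = CAShapeLayer()
                shape.fillColor = nil
                shape.strokeColor = UIColor.red.cgColor
                shape.lineWidth = 4
                layer.addSublayer(shape)
                ringLayers.append(shape)
            }
        }

        // Rebuild each circle path around the shared centre.
        for (scale, shape) in zip(ringScales, ringLayers) {
            shape.path = UIBezierPath(arcCenter: center,
                                      radius: side / 2 * scale,
                                      startAngle: 0,
                                      endAngle: .pi * 2,
                                      clockwise: true).cgPath
        }
    }
}
```

Because everything is computed from min(bounds.width, bounds.height) in layoutSubviews, the rings re-centre and re-scale on any device size or rotation without any hard-coded Auto Layout constants.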
I am displaying a grid of images (3 rows x 3 columns) in a collection view. Each image is a square and its width is determined to be 1/3 of the collection view's width. The collection view is pinned to the left and right margins of the main view.
I do not know what the image height and width will be at runtime, because of the different screen sizes of the various iPhones. For example, each image will be 100x100 display pixels on a 5S, but 130x130 on a 6+. I was advised to supply images that exactly match the size on screen. Bigger images often tend to become pixelated and too sharp when downsized. How does one tackle such a problem?
The usual solution is to supply three versions, for single-, double-, and triple-resolution screens, and downsize in real time by redrawing with drawInRect into a graphics context when the image is first needed.
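As a rough illustration of that approach (drawInRect: is draw(in:) in Swift; on current iOS you would typically go through UIGraphicsImageRenderer, while older code would use UIGraphicsBeginImageContextWithOptions):

```swift
import UIKit

// Downscale by redrawing into a graphics context when the image is first needed.
func downscaled(_ image: UIImage, to size: CGSize) -> UIImage {
    // UIGraphicsImageRenderer uses the screen's scale by default,
    // so the result is crisp on single-, double- and triple-resolution screens.
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: size))
    }
}

// Usage: size the thumbnail to the cell once the layout is known, e.g.
// let thumb = downscaled(original, to: cell.imageView.bounds.size)
```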
I do not know what the image height and width will be at runtime, because of the different screen sizes of the various iPhones. For example, each image will be 100x100 display pixels on a 5S, but 130x130 on a 6+
Okay, so your first sentence is a lie. The second sentence proves that you do know what the size is to be on the different screen sizes. Clearly, if I tell you the name of a device, you can tell me what you think the image size should be. So, if you don't want to downscale a larger image at runtime because you don't like the resulting quality, simply supply actual images at the correct size and resolution for every device, and use the correct image on the actual device type you find yourself running on.
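A hedged sketch of what that could look like in practice (the asset names and sizes are made up): ship a few pre-rendered square sizes and pick the one closest to the cell width you actually compute on the running device.

```swift
import UIKit

// Pre-rendered square sizes shipped with the app, in points (illustrative).
let availableSides: [CGFloat] = [100, 115, 130]

// Pick the asset whose side length is closest to the real cell width.
func bestAssetName(forCellWidth width: CGFloat) -> String {
    let side = availableSides.min { abs($0 - width) < abs($1 - width) } ?? availableSides[0]
    return "grid_image_\(Int(side))"          // e.g. "grid_image_130" (hypothetical name)
}

// Usage:
// let cellWidth = collectionView.bounds.width / 3
// let image = UIImage(named: bestAssetName(forCellWidth: cellWidth))
```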
If your images are photos or raster-type images created with a raster drawing tool, then at some point you will have to scale the original to the sizes you want. You can either do this while running in iOS, or create the sets up front using a tool that gives you better scaling results. Unfortunately, the only perfect image will be the original, with everything else being a distortion of the truth.
For icons, the only accurate rendering solution is to use vector graphics. Tools like Adobe Illustrator will let you create images which you can scale to different sizes without losing clarity. Unfortunately, this still leaves you generating images up front. You can script this generation with most tools, and given you said your images are all square, the total number needed is not huge: at most 3 for iPhone (the 4/5 share a width, plus the 6 and 6+) and 2 for iPad (one for the mini/iPad 1 and one for retina).
Although iOS has no direct support I know of for vector image rendering, there are some 3rd party tools. http://www.paintcodeapp.com/ is an example which seems to let you import vector images or draw vector images and then generate image code to run in your app. This kind of tool would give you what you want as the images are now vector drawings drawn at the scale you choose at run time. $99 though.
There is also SVGKit (https://github.com/SVGKit/SVGKit), but I'm not sure how good or bad it is. It seems to let you simply load and render directly from SVG files. Might be worth trying.
So in summary, I think you either generate the relatively small set of sizes up front using a tool whose output you can control, take the hit in iOS and let it scale the images, or use a 3rd-party vector-to-image rendering kit which would give you what you want.
I have 2 relatively small pngs that will be images inside UIButtons.
Once our app is finished, we might want to resize the buttons and make them smaller.
Now, we can easily do this by resizing the button frame; the system automatically re-sizes the images smaller.
Would the system's autoresize cause the image to look ugly after shrinking the image? (i.e., would it clip pixels and make it look less smooth than if I were to shrink it in a photo editor myself?)
Or would it be better to make the images the sizes they are intended to be?
It is always best to make the images the correct size from the beginning. All resize functions have a negative impact on the end result. Scaling up to a larger image makes a big difference, but even scaling down to a smaller one usually introduces visible noise. Say you have a one-pixel-wide line in your image: scale it down to 90% of the original size and that line now covers only 90% of a pixel's width, so other parts of the image influence the colors of those same pixels.
I'm trying to avoid blended layers on iOS to improve performance. However, I notice that the resizable image I'm using for the backgroundView of my UITableViewCell is being marked as a blended layer:
In fact, using any resizable image--even a JPEG with no transparency--caused layer blending, as seen in this screenshot where first a PNG and then a JPEG is used as a resizable image in a UIImageView. The only resizable image that didn't require a blended layer was a 1x1 pixel image, seen at the bottom:
Is there any way to avoid this? Core Animation profiling is an imprecise art (at least to me), but I think it's a main contributor to dropping to around 25 FPS when scrolling my table view.
Edit 2: Upon more experimentation, I found that if I only stretched the images vertically or horizontally (either the PNG or the JPEG), they weren't marked as blended layers. However, upon even more experimentation, I think this may be because images stretched in only one dimension are smaller. My image is not being treated as blended at 100x100, but it is at 150x100.
I created a very wide image and only stretched it vertically. This did not require blended layers, and achieved the correct effect for my table view cell. This isn't ideal, but because of the small height it's still only 236 bytes for the retina image.
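For reference, here is a sketch of that workaround with a hypothetical asset name and placeholder inset values: the image is made at least as wide as the widest target, and the top/bottom cap insets leave only a thin horizontal band to stretch, so in practice it only stretches vertically.

```swift
import UIKit

// "cell_background_wide" is a placeholder name for a wide, short asset
// (e.g. full screen width by ~11pt tall).
let base = UIImage(named: "cell_background_wide")!

// Fixed 5pt caps at top and bottom; the 1pt band between them stretches vertically.
let insets = UIEdgeInsets(top: 5, left: 0, bottom: 5, right: 0)
let background = base.resizableImage(withCapInsets: insets, resizingMode: .stretch)

// Usage in the table view cell:
// cell.backgroundView = UIImageView(image: background)
```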
I am able to use PNGs that have drop shadows, but when displayed on the BlackBerry the effect looks like the transparency (alpha) channel has been collapsed from its original smooth gradient down to only a few transparency values, giving it a choppy look.
The same issue occurs when drawing on the UI using BlackBerry fields or the graphics.drawBitmap method. Does anyone want to share hints for getting great-looking transparency effects on the BlackBerry?
Dither your images or pre-composite them. When loading an image on a BlackBerry, you get at most 4 bits of alpha data, which leaves 4 bits each for R, G and B. So, if you want to dither your transparent images, go for RGB4444. If you don't dither them, the 8-bit alpha just gets mapped to the nearest 4-bit value, which is what causes the choppiness.
If you include no alpha data (i.e., you pre-composite), you can use RGB565, which gives better image quality overall, but you will have to deal with static positioning for your drop shadows.
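To make the 4-bit limit concrete, here is a small platform-neutral sketch (written in Swift purely for illustration, not BlackBerry Java or any RIM API) of how an 8-bit alpha value collapses to one of only 16 levels; dithering distributes that rounding error across neighbouring pixels instead of leaving visible bands in a smooth gradient.

```swift
// Illustration only (not BlackBerry code): quantise 8-bit alpha to 4 bits,
// which is what produces the "choppy" look in a smooth drop shadow.
func quantizeTo4Bits(_ alpha8: UInt8) -> UInt8 {
    let a4 = (UInt16(alpha8) * 15 + 127) / 255   // round to the nearest of 16 levels
    return UInt8(a4 * 17)                        // expand back to 0, 17, 34, ... 255
}

// A smooth 0...255 alpha ramp ends up with only 16 distinct values:
// let ramp = (0...255).map { quantizeTo4Bits(UInt8($0)) }
// Dithering instead pushes each pixel's rounding error onto its neighbours,
// so the 16 available levels average out to the original gradient.
```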