When I create a PNG in Photoshop at 30x30 px with a transparent background and white lines, it looks low-resolution. Link to a screenshot of the tab bar item in question: http://yadi.sk/d/9zrBjrjxBFyva
I want the lines in this image to be smooth. How can I achieve that?
It looks like you're using a non-retina image on a retina device. Create a 60x60 version and give it the same name as the 30x30 version with the @2x suffix (for example, clock@2x.png). Add this to your project.
Edit: The graphics you're producing in Photoshop are pixelated, and in the case of the @2x image, very blurry. Since the source images are poor quality, there's no way iOS can magically make them smooth.
Use the shape tool in Photoshop to create crisp, sharp shape paths. Make sure to align the shapes to pixel boundaries for extra crispness.
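For reference, here is a minimal sketch of how that naming convention is picked up at runtime (the asset name "clock" is taken from the example above; everything else is assumed):

```swift
import UIKit

// With clock.png (30x30 px) and clock@2x.png (60x60 px) both in the bundle,
// UIImage(named:) automatically loads the @2x variant on a retina device.
let clockIcon = UIImage(named: "clock")
let clockItem = UITabBarItem(title: "Clock", image: clockIcon, tag: 0)
```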
Related
I have designed a lock icon in Sketch to add to a button in my application:
I exported it both as PDF and as PNG (2x, 3x) to add to the Xcode asset catalog. The problem is that when I run the app on an iPhone SE, heavy pixelation can be seen around the edges of the icon:
I've tried both the PDF and the PNG formats, but the result stays the same. Am I missing any settings that need to be applied for the image to look sharp on screen?
Bigger is not necessarily better for a UIButton's image. Try to export your icon at more or less the size at which it will be used. (Note that this also saves memory compared to a much bigger image.)
To adapt to different screen resolutions, you should provide up to three images (@1x, @2x, @3x). You should read Apple's excellent documentation on Image Size and Resolution. It explains exactly how big the images you provide in Xcode should be.
They also have a good explanation of which format you should use depending on the purpose of the image.
EDIT:
You can also use vector resources (.pdf files, for instance) that will render correctly at any resolution. You can read this article about how to set it up in your Xcode project. (If you do so, be careful in the asset's attributes to check Preserve Vector Data and set Scales to Single Scale, otherwise it may not render well.)
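For completeness, a minimal sketch of wiring such an asset to the button (the asset name "lock" and the 24-point size are assumptions):

```swift
import UIKit

// Assumes the icon lives in the asset catalog under the name "lock",
// either as @1x/@2x/@3x bitmaps or as a single-scale PDF with
// Preserve Vector Data enabled.
let lockButton = UIButton(type: .system)
lockButton.setImage(UIImage(named: "lock"), for: .normal)
// Keep the button close to the icon's intended point size so the
// system doesn't have to rescale the rendered image much.
lockButton.frame = CGRect(x: 0, y: 0, width: 24, height: 24)
```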
This can also happen if the image sizes are not correct.
Check the size of your images. The 1x, 2x and 3x sizes should be as follows (for example, for a 24-point icon):
1x = 24x24 px
2x = 48x48 px
3x = 72x72 px
If the images are much bigger than the image view, pixelation can occur.
Hope this helps.
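If it helps, here is a rough sketch of where those numbers come from (assuming a 24-point image view; the multiplier is the device's screen scale):

```swift
import UIKit

// Required pixel size = point size x screen scale:
// 24 pt -> 24 px (@1x), 48 px (@2x), 72 px (@3x).
let pointSize: CGFloat = 24
let requiredPixels = Int(pointSize * UIScreen.main.scale)
print("Provide a \(requiredPixels)x\(requiredPixels) px image for this device")
```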
I have a series of images that I would like to loop through using UIImageView's startAnimating. My trouble is that, when I exported the images, they all came out at a standard 240x160 size, although only a 50x50 area contains the actual image; the rest is transparent space that is just taking up room.
When I set the frame of the image view using image.size.width and image.size.height, iOS uses the images' original size of 240x160, so I am unable to get a frame that conforms to the actual content of the image. Is there a way, using Illustrator, Photoshop, or any other graphics editing software, to export the images based on their natural dimensions rather than a fixed canvas size? Thanks!
I am a fan of vector graphics and think everything in the world should be vector ;-) so here is what you do in Illustrator: File > Document Setup > Edit Artboards. Then click on the image, and the artboard should adjust to its exact size. You can of course have multiple artboards, or simply work with one artboard and however many images.
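If re-exporting isn't an option right away, here is a rough sketch of the animation setup that pins the frame to the visible 50x50 area instead of relying on image.size (the frame names, count, and timing are assumptions):

```swift
import UIKit

// Assumed asset names frame1...frame8; only a 50x50 area of each
// 240x160 canvas contains visible pixels, the rest is transparent.
let frames = (1...8).compactMap { UIImage(named: "frame\($0)") }

// Pin the frame to the visible content instead of image.size,
// which would report the full 240x160 canvas.
let spriteView = UIImageView(frame: CGRect(x: 0, y: 0, width: 50, height: 50))
// .center draws each bitmap at its native size; the transparent padding
// simply overflows the bounds (this assumes the artwork is centered
// within the 240x160 canvas).
spriteView.contentMode = .center
spriteView.animationImages = frames
spriteView.animationDuration = 0.8
spriteView.startAnimating()
```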
Is it the same if I use a big image, i.e. a 40x40 image, in a 20x20 placeholder, or a @2x image for retina?
I mean, I have two alternatives:
- use a 20x20 image.png and a 40x40 image@2x.png
- use a 40x40 image.png
Is it the same?
Thanks.
Using only retina images and leaving the downscaling on non-retina devices up to the system is possible, but not recommended in all cases.
It really depends on the contents of your graphics. If, for example, you are using vector-based graphics as your source (sharp lines, etc.), then offering only the retina images will result in washed-out, blurry images on non-retina displays.
Again, it is possible and entirely fine if your content still looks good enough.
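To make the point/pixel distinction concrete, here is a small sketch (the asset name "image" is taken from the question; the comments describe standard UIKit behaviour as I understand it):

```swift
import UIKit

// With image.png (20x20 px) and image@2x.png (40x40 px) in the bundle,
// UIKit picks the variant matching the screen scale:
if let icon = UIImage(named: "image") {
    print(icon.size)   // 20x20 points on both retina and non-retina devices
    print(icon.scale)  // 2.0 when the @2x file was loaded, 1.0 otherwise
}
// Shipping only a 40x40 image.png makes UIKit treat it as a 40x40-point
// image at scale 1, so it has to be scaled down into the 20x20 placeholder,
// which is where the washed-out look on non-retina screens can come from.
```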
(I can't post images, so I posted links instead.)
I'm working on a pixel-perfect 2D platform game.
The problem is with the PNG images used in-game: around the transparent areas there is some blur.
Like this:
http://i.stack.imgur.com/lBX3A.jpg
If I open the texture with GIMP, this is what I get:
http://i.stack.imgur.com/pOeF4.jpg
This is a sample of my map (1600x zoom).
As you can see, there is no blur around the black (the grey squares indicate transparency).
Tests I did:
Saved without compression and re-opened it: no blur.
To be sure, I added a white background in GIMP (it's easier to see the dark blur on white):
http://i.stack.imgur.com/jfhWv.jpg
Of course, I removed the white background afterwards because I want the image to be transparent.
One last piece of information: there is blur on every transparent PNG, even on my character's sprite sheet. When I animate it, I can see blur bleeding in from the other frames.
After my tests, I concluded that GIMP isn't the problem.
Can you help me? Thanks for reading.
XNA 4, C# 2010 Express Edition, GIMP 2.6.11.
Sorry about my English. ^^
This happens because of "texture filtering", which XNA does by default.
You can probably disable it (in XNA 4 the usual fix is to pass a point-sampling state such as SamplerState.PointClamp to SpriteBatch.Begin).
I found something: https://gamedev.stackexchange.com/questions/6820/how-do-i-disable-texture-filtering-for-sprite-scaling-in-xna-4-0
Normally I use CALayer's shadowRadius, but now I also need to work with UIImage and apply shaped shadows based on the content of the image.
For example, when I have a layer with text in it and I set a shadow, it works automatically on the text and not just on the rectangle of the layer.
In Photoshop this is known as a "layer style", and it automatically works based on the shape of the image content.
I am afraid that I need to implement some Harvard-Stanford-MIT-NASA kind of hardcore logic to apply a shadow to a "shaped image", i.e. an image of a round icon where the areas around the icon are fully transparent.
I'm able to manipulate images on a per-pixel level, as I'm already doing this to draw charts, so an open-source implementation of the relevant algorithms would be fantastic. And if not: how does this basically work? My guess is I would "just" blur a grayscale version of my image somehow and then overlay it with the non-blurred version.
My guess is I would "just" blur a grayscale version of my image somehow and then overlay it with the non-blurred version.
That's pretty much it, actually. Except instead of blurring a grayscale version of the image, blur a solid-colored version (i.e. keep the alpha channel, but make every pixel black). Although CALayer's shadowing should do this for you already.
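In Core Graphics terms, that blur-the-silhouette-and-composite step is exactly what setting a shadow on the context before drawing the image does for you. A minimal sketch, with illustrative blur/offset values:

```swift
import UIKit

// Renders `image` with a soft shadow that follows its alpha channel.
func imageWithShadow(_ image: UIImage,
                     blur: CGFloat = 6,
                     offset: CGSize = CGSize(width: 0, height: 3)) -> UIImage {
    let padding = blur * 2
    let size = CGSize(width: image.size.width + padding * 2,
                      height: image.size.height + padding * 2)
    return UIGraphicsImageRenderer(size: size).image { ctx in
        // Core Graphics builds the shadow from the alpha of whatever is
        // drawn next, i.e. it blurs the image's silhouette and composites
        // it behind the image for us.
        ctx.cgContext.setShadow(offset: offset,
                                blur: blur,
                                color: UIColor.black.withAlphaComponent(0.5).cgColor)
        image.draw(at: CGPoint(x: padding, y: padding))
    }
}
```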
If your images are already composited onto a background (i.e. without real transparency), you have a harder problem, as you first need to "remove" the background to recover the shape of the object before you can generate the shadow.
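And for images that do keep their real transparency, the CALayer route mentioned above is the simplest option. A sketch, assuming an asset named "roundIcon" with transparent surroundings:

```swift
import UIKit

let iconView = UIImageView(image: UIImage(named: "roundIcon"))
iconView.layer.shadowColor = UIColor.black.cgColor
iconView.layer.shadowOpacity = 0.5
iconView.layer.shadowRadius = 6
iconView.layer.shadowOffset = CGSize(width: 0, height: 3)
// With no shadowPath set, the layer derives the shadow from the rendered
// content's alpha, so it hugs the icon's shape rather than its bounding box.
iconView.layer.masksToBounds = false
```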