I want to use ▾ (U+25BE BLACK DOWN-POINTING SMALL TRIANGLE) in my app, but in practice the triangle is misshapen and smaller than what the web browser shows. Is there a way to keep the size and shape of this symbol (or Unicode symbols in general) exactly the same in Xcode as they are in the web browser?
Related
I am using black and white circles in my project, and I noticed that on iOS Chrome the white circle is larger than the black circle.
● - U+25CF
○ - U+25CB
In a desktop environment the circles appear to be the same size; they have slightly different heights, but the difference is not noticeable.
I am trying to make these circles the same size on iOS Chrome.
I feel like I have eliminated every other variable and that the browser is responsible for the different sizes of these circles.
Actually, as far as Unicode is concerned, all characters are font dependent. When a certain character is not available in the current font, it is picked from a fallback font.
If you have not configured a custom font, or if these characters are not available in the font you picked, then the differently sized circles are being drawn with the default font used by Chrome/iOS.
So, you have two ways to go: either find a font that has the characters drawn in a way that suits you, and force that, or give up on Unicode characters for these glyphs and use inline images instead.
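If you are rendering these glyphs natively (as in the Xcode question above), you can at least detect when a particular font is missing a glyph and would trigger a fallback. A minimal Swift sketch; the helper name is my own:

```swift
import UIKit
import CoreText

// Illustrative helper: returns true only if `font` itself contains a glyph
// for `character` (CTFontGetGlyphsForCharacters does not consult fallbacks).
func fontHasGlyph(_ font: UIFont, for character: Character) -> Bool {
    let ctFont = CTFontCreateWithName(font.fontName as CFString, font.pointSize, nil)
    let utf16 = Array(String(character).utf16)
    var glyphs = [CGGlyph](repeating: 0, count: utf16.count)
    return CTFontGetGlyphsForCharacters(ctFont, utf16, &glyphs, utf16.count)
}

let body = UIFont(name: "HelveticaNeue", size: 17)!
print(fontHasGlyph(body, for: "●"))  // U+25CF BLACK CIRCLE
print(fontHasGlyph(body, for: "○"))  // U+25CB WHITE CIRCLE
```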
You could also make use of SVG drawings, which can be embedded within the HTML markup itself; that will ensure a consistent look.
I am displaying a grid of images (3 rows × 3 columns) in a collection view. Each image is a square, and its width is set to 1/3 of the collection view's width. The collection view is pinned to the left and right margins of the main view.
I do not know what the image height and width will be at runtime, because of the different screen sizes of the various iPhones. For example, each image will be 100x100 display pixels on a 5S, but 130x130 on a 6+. I was advised to supply images that exactly match the size on screen, since bigger images often become pixelated or over-sharpened when downsized. How does one tackle such a problem?
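For concreteness, here is a Swift sketch of how that 1/3-width square sizing is computed at runtime (the class name is illustrative, and it assumes a flow layout with zero item spacing and zero section insets):

```swift
import UIKit

// Hypothetical controller: sizes each cell to one third of the
// collection view's width so a 3x3 grid of squares fits exactly.
final class GridViewController: UIViewController, UICollectionViewDelegateFlowLayout {
    func collectionView(_ collectionView: UICollectionView,
                        layout collectionViewLayout: UICollectionViewLayout,
                        sizeForItemAt indexPath: IndexPath) -> CGSize {
        // Round down so three cells never exceed the available width.
        let side = (collectionView.bounds.width / 3).rounded(.down)
        return CGSize(width: side, height: side)
    }
}
```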
The usual solution is to supply three versions, for single-, double-, and triple-resolution screens, and downsize in real time by redrawing with drawInRect into a graphics context when the image is first needed.
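A sketch of that redraw-on-first-use idea in Swift, using UIGraphicsImageRenderer (the modern replacement for the older begin/end image-context calls); `targetSize` is assumed to be the cell size in points:

```swift
import UIKit

// Redraws `image` at exactly `targetSize`, so the result matches the
// on-screen size instead of being scaled on the fly by the image view.
func downscaled(_ image: UIImage, to targetSize: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: targetSize)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: targetSize))
    }
}
```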
"I do not know what the image height and width will be at runtime, because of different screen sizes of various iPhones. For example each image will be 100x100 display pixels on 5S, but 130x130 on 6+"
Okay, so your first sentence is a lie: the second sentence proves that you do know what the size will be on the different screen sizes. Clearly, if I tell you the name of a device, you can tell me what you think the image size should be. So, if you don't want to downscale a larger image at runtime because you don't like the resulting quality, simply supply actual images at the correct size and resolution for every device, and use the correct one on the device type you find yourself running on.
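For example, a Swift sketch of that lookup; the asset-naming scheme here is entirely hypothetical:

```swift
import UIKit

// Picks a pre-rendered asset matching the runtime cell size, assuming
// files like "grid_100x100" and "grid_130x130" ship in the bundle.
func gridImage(forCollectionViewWidth width: CGFloat) -> UIImage? {
    let side = Int(width / 3)               // e.g. 100 on a 5S, 130 on a 6+
    return UIImage(named: "grid_\(side)x\(side)")
}
```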
If your images are photos or other raster images created with a raster drawing tool, then somewhere you will have to scale the original to the sizes you want. You can either do this at run time in iOS, or create the sets up front using a tool that gives you better scaling results. Unfortunately, the only perfect image is the original; everything else is a distortion of the truth.
For icons, the only accurate rendering solution is to use vector graphics. Tools like Adobe Illustrator will let you create images which you can scale to different sizes without losing clarity. Unfortunately this still leaves you generating images up front. You can script this generation with most tools, and since you said your images are all square, the total number needed is not huge: at most 3 for iPhone (the 4 and 5 share a width, then the 6 and the 6+) and 2 for iPad (one for the mini/iPad 1 and one for Retina).
Although iOS has no direct support that I know of for vector image rendering, there are some third-party tools. http://www.paintcodeapp.com/ is one example; it seems to let you import or draw vector images and then generate drawing code to run in your app. This kind of tool would give you what you want, since the images become vector drawings rendered at whatever scale you choose at run time. It costs $99, though.
There is also SVGKit (https://github.com/SVGKit/SVGKit), though I am not sure how good it is. It seems to let you simply load and render directly from SVG files. It might be worth trying.
So, in summary: I think you either generate the relatively small set of sizes up front using a tool whose output you can control, take the quality hit in iOS and let it scale the images, or use a third-party vector-to-image rendering kit, which would give you what you want.
How do I match the font pixel size given to me by my designer in Photoshop to the correct font size in Xcode's Interface Builder?
For example, my designer is using Helvetica Neue Regular at 32px in his design.
I've used a few points-to-pixels translation sites, but the results don't seem exact.
I have attempted to follow the answer from this question, but to no avail:
https://stackoverflow.com/a/6107836/1014164
You will never get perfect results when visually comparing a Photoshop comp to a real program. In fact, it's not uncommon for a text layout to differ between computers, because version and operating-system differences (as well as monitor layouts) cause the text to reflow every time it's edited.
Unless things are very different in other versions of Photoshop, your designer hasn't really specified 32px, because Photoshop doesn't lay text out in pixels; it works in points/picas. The exact text rendering is also dependent on the document's resolution (which differs between print and screen).
The best you can do is get the text to look roughly proportional to the designer's intent. In modern iOS, most apps will use the user's customized font settings anyway.
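One rule of thumb for getting proportionally close: if you assume the comp was drawn at a device's native scale, a point is simply scale-many pixels, so you can divide the pixel spec by the scale factor. A sketch:

```swift
import CoreGraphics

// If the designer's comp targets a @2x Retina device, 32px of type
// corresponds to 16pt in Interface Builder (use scale 3 for @3x devices).
let pixelSize: CGFloat = 32
let screenScale: CGFloat = 2
let pointSize = pixelSize / screenScale   // 16pt
```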
I'm working on an iPad app with a significant amount of text, and I was wondering whether using UITextFields and UILabels is more of a performance/memory hit than simply using a UIImageView with a PNG of the text.
In some cases the text animates, but in most cases it's static.
Thank you.
Update:
Taking Marc's advice, I did a little digging with a new Xcode project. Here are the experiment details:
Test 1:
Brand-new Single View template Xcode project for iPad (not using storyboards)
One image view, centered, with a 678x828 image of the paragraph text (36 KB image)
The custom fonts and stylings were baked into the image.
The result:
756 KB of Live Bytes (842 KB with the Retina PNG)
Test 2:
Added a second image view with a different paragraph (699x749, 82 KB)
The result:
767 KB of Live Bytes (854 KB with the Retina PNG)
Test 3:
Took the same copy and added 4 UILabels
Styled the fonts to match those baked into the PNG
Embedded custom fonts
The result:
965kb of Live Bytes
Test 4:
Added 4 more custom text labels with the same text & style as the second image view
The result:
1024kb of Live Bytes
From this angle, it appears that using PNGs with baked-in copy and styling has a lower memory footprint and scales better. Obviously this is a very quick & dirty experiment.
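For reference, each label in Tests 3-4 was set up roughly like this (the font name and copy here are placeholders, not the actual values used):

```swift
import UIKit

// Sketch of one of the four labels: multi-line, styled with an embedded
// custom font (which must be listed under UIAppFonts in Info.plist).
let label = UILabel(frame: CGRect(x: 0, y: 0, width: 678, height: 414))
label.numberOfLines = 0
label.font = UIFont(name: "CustomSerif-Regular", size: 18)  // hypothetical font
label.text = "Paragraph copy matching the baked-in PNG…"
```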
Sub-pixel font rendering like ClearType dramatically improves font display resolution and improves screen readability. How would I program sub-pixel rendering of a font (in general), and how can this be achieved on the iPad (C, C++, or Objective-C on an iOS device)? Fonts are quite blurry at certain sizes on the iPad, and I know that the iPad's display would work well with this technique...
So, how would I develop a font rendering engine for the iPad? (E.g., how do I even access sub-pixels? Do I use OpenGL? Is there an existing open-source font rendering engine written in C, C++, or Objective-C for Mac OS X?)
Each pixel on the iPad is a rectangle of red, green, and blue components, so one might think that sub-pixel font rendering would be a good choice for the device.
But consider that this device can easily be rotated between portrait and landscape modes, and applications are expected to respond to that change. Rotating the screen also rotates the RGB stripe layout relative to the text, which implies that your sub-pixel font mechanism would have to respond as well: you would need two separate sub-pixel descriptions for each font.
Now throw in the fact that developers expect to be able to write universal applications that run on both the pad and the phones in a single purchase/download, and that the various generations of the phones have different pixel configurations. Each of those, recall, would need its fonts described differently in portrait mode and in landscape mode. Now you have an explosion of font descriptions.
Now recall that we're speaking of portable devices where the most precious resource is the battery, and sub-pixel font rendering is more computationally intensive.
I'm guessing that this is not too different from the thought process that led Apple to eschew sub-pixel font rendering in favor of waiting for display technology to increase pixel density to the point where it is no longer necessary (the Retina display on the iPhone 4 being the first step in that direction).
I would wager that some future edition of the iPad will have a display of similar density, and then it won't matter as much. Any effort you invest in a sub-pixel font rendering mechanism for your iPad application will be obviated at that point, so I would recommend not going down that path.