I have consistently noticed that the Nexus 7 will not render a webfont if its containing element has been sized using percentages instead of pixels.
As you can imagine, this is very bad news for responsive design. The odd thing is that on my Android phone, using the same browser (Chrome), the webfonts appear fine.
Has anyone else run into this problem?
***UPDATE***
I wanted to test this theory, so I created a test page with three divs: one sized using pixels, one sized using percentages, and the last sized using ems. The webfonts appeared in all three divs without issue. On my site, however, toggling a div's size from percentages to pixels still triggers the issue, so it seems this alone is not the problem. I'm going to continue testing with more nested divs, but if any of you want to experiment yourselves, here's the link:
https://dl.dropbox.com/u/46736730/nexus-test/nexus-test.html
ANY ideas/insight would be really helpful!
I've been working on a React Native app for the past few months, and something has always eluded me that I'm just now trying to get to the bottom of. I'm trying to standardize the font sizes within my app (for body text, headers, etc.) and am struggling to understand where exactly React Native gets its default font size from.

I've been testing in an iPhone 8 simulator running iOS 11, and through experimentation have come to the conclusion that the default font size is 14 logical pixels (as in, a brand-new element with no styling). However, checking the iOS style guidelines, it would seem the default body text on iOS is intended to be 17pt. Although I don't know a ton about device scaling, I've tried crunching the numbers in every online converter I could find and could not work out how 17pt could come close to 14 logical pixels (assuming RN uses the body font size by default). The only way I was able to roughly reconcile the two was if the DPI were around 90, but the DPI of iPhones is much, much higher.

So might anyone be able to offer insight into how default fonts are selected in React Native? Even hunting through the source code I couldn't find much. I'd like to be able to calculate the base size myself so I can scale the other font sizes accordingly. Thanks in advance!
I suggest you look at react-native-elements' helper method normalizeText. It calculates the pixel ratio and sets the text's fontSize accordingly.
You should also take into consideration the accessibility text size option in iOS, as it affects your whole app.
I'm currently working on a side project, but I'm stuck on one big part.
The goal is that the user can take a screenshot in a different, popular app that contains 6 images/icons. I want it so that when the user goes into my app, they can upload that screenshot, and I can detect the 6 images and place them into a collection view.
The issue is detecting which 6 images are in the screenshot. I thought about using OCR like Tesseract, but I'm not sure that would work because there's zero text in the screenshot, only the 6 images. Something that might help is that in that app there are only 50 kinds of images. Would creating some sort of database of images help? But how would I compare them?
I apologise if this doesn't make sense; I just don't know how to word it. Any help would be great.
Assuming you want to be able to do this across multiple types of devices, a computer vision library like OpenCV might be the way to go.
If your users always run the app on the same device (always on an iPhone 5, say), then the icons might always land in exactly the same spot, and you could simply slice the screenshot up, extract the component images, and do a byte-wise compare on the sub-images. However, you've got iPhone 4, iPhone 5, iPhone 6, and 6+ screen sizes, plus iPad, iPad retina, and iPad Pro (small and large) to deal with, and possibly portrait and landscape orientations. Presumably the 6 images will land at different spots on the screens of all those different devices, and you'll have different image resolutions to deal with as well. With OpenCV you should be able to find the bounding rects of the images by "looking at" the screenshots rather than building a complex set of rules.
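If you do end up in the single-device case, a minimal Swift sketch of the slice-and-compare idea might look like the following. The slot rects, the 2x3 grid layout, and the reference dictionary are all hypothetical; real values depend on the source app's layout:

```swift
import UIKit

// Crops a screenshot into six fixed slots and matches each crop against
// a dictionary of reference icon PNGs by raw data.
func matchIcons(in screenshot: UIImage,
                against references: [String: Data]) -> [String?] {
    // Hypothetical 2x3 grid of 160x160-point slots starting 100 points down.
    let slots: [CGRect] = (0..<6).map { i in
        CGRect(x: CGFloat(i % 2) * 160,
               y: 100 + CGFloat(i / 2) * 160,
               width: 160, height: 160)
    }
    let scale = screenshot.scale
    return slots.map { slot -> String? in
        // cropping(to:) works in pixel coordinates, so convert from points.
        let pixelRect = CGRect(x: slot.origin.x * scale,
                               y: slot.origin.y * scale,
                               width: slot.width * scale,
                               height: slot.height * scale)
        guard let cg = screenshot.cgImage?.cropping(to: pixelRect),
              let data = UIImage(cgImage: cg).pngData() else { return nil }
        // Naive byte-wise compare; this only works when the crop and the
        // reference are pixel-identical (same device, same rendering).
        return references.first { $0.value == data }?.key
    }
}
```

The byte-wise compare is deliberately fragile; any change in resolution or rendering breaks it, which is exactly why the OpenCV route below is more robust.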
Take a look at the OpenCV example code for matching SIFT features (the python version here, but you can find examples in other languages as well). It demonstrates a simpler version of what you want to do.
When I add a PDF resource to an asset catalogue, set it to "Vectors", and configure the slicing, the slicing does not behave as expected: the image gets stretched in Interface Builder, and on the device I see weird results. However, I can't find any confirmation that slicing doesn't work on vector assets.
Can anyone shed some light on this?
Xcode 6, iOS 8.
Thanks!
I have been trying to get this to work today. It seems that Xcode will slice a 2x PDF image as if it were 1x. That means if you set an inset of 15 points, Xcode applies the inset at 15 pixels, not 15 points, so it ends up being half of what you wanted. I can't seem to find any way around this; you have to do it in code instead of Interface Builder. Thanks, Xcode.
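For reference, a minimal sketch of the code-side workaround ("ButtonBackground" is a hypothetical asset name; adjust the insets to your artwork). Unlike the broken catalogue slicing, resizableImage(withCapInsets:) takes its insets in points, which sidesteps the points-vs-pixels confusion:

```swift
import UIKit

// Bypass the asset catalogue's slicing UI and set the cap insets yourself.
func slicedButtonBackground() -> UIImage? {
    let insets = UIEdgeInsets(top: 15, left: 15, bottom: 15, right: 15)
    return UIImage(named: "ButtonBackground")?
        .resizableImage(withCapInsets: insets, resizingMode: .stretch)
}
```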
I think the conclusion (at this point in time) is:
It's broken
I am new to iOS programming and programming in general so this will probably be a fairly easy question to answer for someone who is experienced.
I am making a game using SpriteKit, and I need to include different background image sizes for the different Retina display sizes on the iPhone 4 and 5. I am using a graphics package to create the images in .png format and then adding them to the project. The issue I have is that if I make a 640x1136 image, it works on the 5, and if I use 640x960, it works fine on the 4 but leaves blank space around the edge on the 5. (I am running it on the simulator.)
If I include two identical images with different names, one for each device, how can I load the right one in? Or do I only need one high-resolution image, with some code to change how it loads, so that it covers the whole screen without pixelation or loss of quality on both devices?
Any help or advice is appreciated. I apologise if this is a simple question, thanks for your time.
Note:
I found out plenty on the internet about using the @2x suffix for high-resolution images, but that's not what I'm looking for. I know how to code for different resolutions, just not two different screen sizes with the same resolution, if that makes any sense.
If you're on the iOS 7 SDK, which you most likely are, make use of the .xcassets catalogue. It has options for different screen sizes; put the different versions of your image there, and then load the correct image in code.
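A minimal sketch of the loading side, assuming two hypothetical asset names ("Background-480" for the 640x960 art and "Background-568" for the 640x1136 art), picking by the screen's point height:

```swift
import SpriteKit
import UIKit

// Builds a centred background node sized for the current device.
// The iPhone 4/4S screen is 480 points tall; the iPhone 5 is 568.
func backgroundNode(for sceneSize: CGSize) -> SKSpriteNode {
    let name = UIScreen.main.bounds.height >= 568 ? "Background-568"
                                                  : "Background-480"
    let background = SKSpriteNode(imageNamed: name)
    background.position = CGPoint(x: sceneSize.width / 2,
                                  y: sceneSize.height / 2)
    return background
}
```

You could call this from your scene's setup with the scene's own size and add the returned node as a child.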
Using UIImageView, I am trying to animate an image sequence of 300 PNG files. It works fine on the simulator; however, on an actual device it doesn't work at all and I see a blank screen.
If I am right, this is possibly due to the large number of images (approx. 300) that I am trying to load and animate, or is there some other issue? Please advise.
It will depend on how big the .png files are, which test device you're using, etc. When you test on the simulator, it has access to all the computer's RAM, which is considerably more than an iPhone has. I'll try to improve my answer if you post up some code; it sounds like there is a better way to do it.
Have you tried using a small subset of the frames (say 10) to see if that works, something like the sketch below? You could also profile your app and check what's going on that way.
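A minimal sketch of that test, assuming a hypothetical frame_000.png, frame_001.png, ... naming scheme (adjust the format string to your file names):

```swift
import UIKit

// Loads a small subset of the frames and animates them, so you can
// check whether the blank screen is a memory problem.
func startTestAnimation(on imageView: UIImageView, frameCount: Int = 10) {
    var frames: [UIImage] = []
    for i in 0..<frameCount {
        if let frame = UIImage(named: String(format: "frame_%03d", i)) {
            frames.append(frame)
        }
    }
    imageView.animationImages = frames
    imageView.animationDuration = TimeInterval(frameCount) / 30.0 // ~30 fps
    imageView.startAnimating()
}
```

Note that UIImage(named:) caches every frame it loads; if you go back to the full 300 frames, UIImage(contentsOfFile:) avoids the cache, though at that size holding all the decoded frames in memory at once is likely the real problem.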