ARKit reference image loaded at different size than in Assets - ios

I am running the sample project for ARKit and I am trying to add my own image. There is a weird behaviour as the image is loaded at a different size than set in the asset inspector:
The image is (almost) A5 size (14.8 x 19.7 cm), but it is loaded by ARKit at 14.8 x 11.1 cm. The metadata of the picture also has a height greater than the width: 3024 x 4032 pixels. The image is a JPG.
Note that no changes were made to the code of Apple's sample project, which I am running as-is.
Another note: the app detects the image successfully, but the detected area on the piece of paper is clipped to the dimensions printed in the console.
A third note: the 11.1 cm dimension is exactly what you would get if the image were rotated 90 degrees (i.e. if the height were taken to be 14.8 cm, the aspect ratio would give a width of 11.1 cm). This is odd, as nothing in the asset settings suggests any rotation. I also double-checked my image, but there seems to be nothing wrong with it.
The question is: is there anything I am missing or things I have to consider/do with my sample image? Is this a known issue or is there any workaround to make ARKit load my image with the expected dimensions?
Update
The source of the Apple's sample project: https://developer.apple.com/documentation/arkit/detecting_images_in_an_ar_experience
An extra side note: I only experienced this issue with a single image file. It was an image taken by an iPhone and converted from HEIC to jpg using an online converter (which I cannot find anymore). Anyway, the image file did not seem to have any anomalies.
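Given the 90-degree pattern and the HEIC-to-JPG conversion history, a plausible culprit is the EXIF orientation flag: the JPG's pixel data may be stored rotated, with a metadata flag telling viewers to rotate it for display, and the asset catalog and ARKit may not agree on which dimensions to use. A hedged sketch (the helper names are my own) that bakes the orientation into the pixel data and then builds the reference image in code, so the physical width is set explicitly:

```swift
import ARKit
import UIKit

// Hypothetical helper: redraw a UIImage so its pixel data matches its
// display orientation, discarding any EXIF orientation flag.
func normalizedImage(_ image: UIImage) -> UIImage {
    guard image.imageOrientation != .up else { return image }
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: image.size))
    }
}

// Build the reference image in code instead of the asset catalog,
// so the physical width (in meters) is stated explicitly.
func makeReferenceImage(from image: UIImage,
                        physicalWidthMeters: CGFloat) -> ARReferenceImage? {
    guard let cgImage = normalizedImage(image).cgImage else { return nil }
    let reference = ARReferenceImage(cgImage,
                                     orientation: .up,
                                     physicalWidth: physicalWidthMeters)
    reference.name = "myA5Image" // hypothetical name
    return reference
}
```

The resulting image could then be added to `ARWorldTrackingConfiguration.detectionImages` in place of the asset-catalog group. This is only a sketch of one workaround, not a confirmed fix for the sample project's behavior.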

Related

iOS ideal image resolution

I'm having a real hard time understanding this, but let's say I have an iOS app for both iPad and iPhone and I want to download an image from a server and display it in full screen.
I have read that the iPad Pro has a resolution of 2732x2048, and if we want to display an image full screen we would need to download the image at that size, right? However, I have also read that an image should never be over 300KB. I was not able to get an image of this size under 2MB (I used JPEGmini, for example, to reduce the file size).
And I don't think that iPhone users would need to download such a huge image, so my question is: what resolution should my images be on the server, and how can I keep them at a reasonable file size? Also, should I upload multiple images for different devices? If so, how many and at what resolutions?
Isn't the problem merely that you are holding incompatible beliefs? This is the belief that is giving you trouble:
I also read that the image should never be over 300KB.
Let go of it.
Clearly it is right to say that the image should be no larger than needed for display. But an image to be shown full screen on the 12.9-inch iPad Pro needs to be 2732x2048 pixels, which is that screen's native 2x resolution. So that's that.
(You could, alternatively, use an image half that size and let it upscale. It wouldn't look quite as good as the full-resolution image, but it might be acceptable.)
On a smaller device, yes, you should scale down the image in code, so that you are not holding in memory an image larger than needed for display. But on the iPad Pro, you need the large image for display.
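The "scale down in code" step above might be sketched like this, assuming UIKit (for brevity the sizing logic fills the screen's pixel size exactly and ignores aspect-ratio fitting, which a real app would handle):

```swift
import UIKit

// Sketch: downscale a full-size downloaded image to the current
// screen's pixel dimensions before holding it in memory.
func imageScaledToScreen(_ fullSize: UIImage) -> UIImage {
    let points = UIScreen.main.bounds.size
    let scale = UIScreen.main.scale
    let target = CGSize(width: points.width * scale,
                        height: points.height * scale)
    let format = UIGraphicsImageRendererFormat()
    format.scale = 1 // render at exact pixel dimensions, not point dimensions
    let renderer = UIGraphicsImageRenderer(size: target, format: format)
    return renderer.image { _ in
        fullSize.draw(in: CGRect(origin: .zero, size: target))
    }
}
```

Redrawing like this, rather than just assigning the big image to an image view, is what keeps the memory footprint proportional to the display size instead of the download size.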

iOS - One Vector Image for All iPhone Resolutions

I am trying to create one vector image for all screen sizes of iPhones. I created pdf file using Illustrator with size for iPhone 6 plus i.e. 1242*231 (231 is my required height of image) and Included it in image assets and changed scaling factor to Single Vector.
Now it is displayed on the iPhone 6 Plus with no problem. But when I try the same single-vector image on the iPhone 6, it is squeezed.
I found out on the web that the PDF image is automatically rasterized to @2x and @1x versions: e.g. a 300*300 @3x asset is converted to 200*200 and 100*100.
According to this, the behavior is expected, because 1242 × 2/3 = 828, while the iPhone 6 needs 750 pixels to display accurately.
But my question is: wasn't the vector image supposed to adjust to this? Is there any workaround for this problem?
Try saving it in SVG format; that would be a better solution.
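If you stay with the PDF, another workaround is to rasterize it yourself at runtime instead of relying on the asset catalog's fixed @1x/@2x/@3x conversion. A hedged Core Graphics sketch, assuming a single-page PDF bundled with the app:

```swift
import UIKit

// Sketch: rasterize a bundled PDF page at an exact point size at runtime,
// sidestepping the asset catalog's build-time rasterization.
func image(fromPDFNamed name: String, size: CGSize) -> UIImage? {
    guard let url = Bundle.main.url(forResource: name, withExtension: "pdf"),
          let document = CGPDFDocument(url as CFURL),
          let page = document.page(at: 1) else { return nil }

    let box = page.getBoxRect(.mediaBox)
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { ctx in
        // PDF coordinates are bottom-up; flip the context, scale the
        // media box to the target size, and account for the box origin.
        ctx.cgContext.translateBy(x: 0, y: size.height)
        ctx.cgContext.scaleBy(x: size.width / box.width,
                              y: -size.height / box.height)
        ctx.cgContext.translateBy(x: -box.origin.x, y: -box.origin.y)
        ctx.cgContext.drawPDFPage(page)
    }
}
```

Note also that since Xcode 9 the asset catalog has a "Preserve Vector Data" checkbox, which keeps the PDF's vector data available for runtime rescaling on newer OS versions, which may address this without any code.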

supplying the right image size when not knowing what the size will be at runtime

I am displaying a grid of images (3rows x 3 columns) in collection view. Each image is a square and its width is determined to be 1/3 of collectionView's width. Collection view is pinned to left and right margin of the mainView.
I do not know what the image height and width will be at runtime, because of the different screen sizes of the various iPhones. For example, each image will be 100x100 display pixels on a 5S, but 130x130 on a 6+. I was advised to supply images that exactly match the size on screen. Bigger images often become pixelated and over-sharpened when downsized. How does one tackle such a problem?
The usual solution is to supply three versions, for single-, double-, and triple-resolution screens, and downsize in real time by redrawing with drawInRect into a graphics context when the image is first needed.
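The redraw-on-first-use approach described above might be sketched as follows, using UIGraphicsImageRenderer (the modern wrapper around the older begin/end image-context API that `drawInRect` was typically paired with) together with a simple cache so each image is redrawn only once:

```swift
import UIKit

// Sketch: redraw a bundled asset into a graphics context at the exact
// cell size the first time it is needed, then serve it from a cache.
final class ThumbnailCache {
    private var cache = [String: UIImage]()

    func thumbnail(named name: String, fitting cellSize: CGSize) -> UIImage? {
        if let cached = cache[name] { return cached }
        guard let original = UIImage(named: name) else { return nil }
        let renderer = UIGraphicsImageRenderer(size: cellSize)
        let resized = renderer.image { _ in
            original.draw(in: CGRect(origin: .zero, size: cellSize))
        }
        cache[name] = resized
        return resized
    }
}
```

In the collection-view case, `cellSize` would be the computed one-third of the collection view's width, so the drawn bitmap matches the on-screen size exactly.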
I do not know what the image height and width will be at runtime, because of different screen sizes of various iPhones. For example each image will be 100x100 display pixels on 5S, but 130x130 on 6+
Okay, so your first sentence is a lie. The second sentence proves that you do know what the size is to be on the different screen sizes. Clearly, if I tell you the name of a device, you can tell me what you think the image size should be. So, if you don't want to downscale a larger image at runtime because you don't like the resulting quality, simply supply actual images at the correct size and resolution for every device, and use the correct image on the actual device type you find yourself running on.
If your images are photos or raster type images created using a raster drawing tool, then somewhere you will have to scale the original to the sizes you want. You can either do this while running in iOS, or create sets up front using a tool which can give you better scaling results. Unfortunately, the only perfect image will be the original with everything else being a distortion of the truth.
For icons, the only accurate rendering solution is to use vector graphics. Tools like Adobe Illustrator will let you create images which you can scale to different sizes without losing clarity. Unfortunately this still leaves you generating images up front. You can script this generation with most tools, and since you said your images are all square, the total number needed is not huge: at most 3 for iPhone (the 4/5 share a width, plus the 6 and 6+) and 2 for iPad (@1x for the mini/iPad 1 and @2x for retina).
Although iOS has no direct support I know of for vector image rendering, there are some 3rd party tools. http://www.paintcodeapp.com/ is an example which seems to let you import vector images or draw vector images and then generate image code to run in your app. This kind of tool would give you what you want as the images are now vector drawings drawn at the scale you choose at run time. $99 though.
There is also the SVGKit (https://github.com/SVGKit/SVGKit), but not sure how good/bad this is. It seems to let you simply load and render direct from SVG files. Might be worth trying.
So in summary, I think you either generate the relatively small subset up front using a tool you can control the output from, take the hit in iOS and let it scale the images or use a 3rd party vector to image rendering kit which would give you what you want.

Tilemap gets messed up when converted to hd

I have a floorplan that I need to turn into a tilemap. I'm using the program HD2x to convert my tilemap into an -hd tilemap. I tried it in different ways:
1) I converted the floorplan into an -hd .png with HD2x, put this into Tiled, saved it, and converted the final .tmx file to -hd. I then put the -hd .tmx and -hd .png files into Xcode.
2) I put the regular floorplan into a .tmx, then converted this to -hd, converted floorplan.png to -hd, and put these into Xcode.
Neither is working: either the tilemap is half the size it should be, or it is a QUARTER of the size it should be and the floorplan looks messed up.
Please help.
Original Comment
You might be using the program wrong. It doesn't make sense that a tool would take an SD image and make it HD. Most likely it is meant to take an HD image and cut its resolution in half for the SD version.
Answer
It seems like you are creating images that are half the size of the original, but you are expecting the opposite. In general you wouldn't want to go from SD to HD by simply increasing the image's resolution, because the quality would drop: doubling an image's size by upsampling will not look good.
But quality aside, it wouldn't make sense for someone to build a tool that increases your resolution for you by simply doubling pixel dimensions. If the application you are using does offer that as an option, you are likely not setting the right one. From the sounds of it, the application is creating images half the size of the ones you feed it, which is likely why you are getting half or a quarter of the expected size.

Capturing image at higher resolution in iOS

Currently I am working on an image app where I can capture an image using the AVFoundation framework. What I am looking for is how to capture an image at a certain resolution and DPI (maybe 300 DPI or greater).
How can I do this?
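For the resolution half of the question, a minimal AVFoundation sketch follows. The `.photo` preset requests the device's full still-image resolution; DPI, by contrast, is not a capture setting at all but print metadata relating pixels to physical inches (pixels ÷ DPI = printed inches):

```swift
import AVFoundation

// Sketch: configure a capture session for the highest still-image
// resolution the device offers.
func makePhotoSession() -> (AVCaptureSession, AVCapturePhotoOutput)? {
    let session = AVCaptureSession()
    session.sessionPreset = .photo // full sensor resolution for stills

    guard let camera = AVCaptureDevice.default(for: .video),
          let input = try? AVCaptureDeviceInput(device: camera),
          session.canAddInput(input) else { return nil }
    session.addInput(input)

    let output = AVCapturePhotoOutput()
    guard session.canAddOutput(output) else { return nil }
    session.addOutput(output)
    return (session, output)
}
```

A capture would then be triggered with `output.capturePhoto(with: AVCapturePhotoSettings(), delegate: ...)`. To hit a specific DPI target you would set the DPI tag in the saved image's metadata, not in the capture pipeline.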
There have been numerous posts on here about trying to do OCR on camera-generated images. The problem is not that the resolution is too low, but that it's too high. I cannot find the link right now, but there was a question a year or so ago where, in the end, reducing the image size by a factor of four or so made the OCR engine work better. If you examine the image in Preview, what you want is on the order of 16x16 or 32x32 pixels per character, not 256x256. Frankly I don't know the exact number, but I'm sure you can research this and find posts from actual framework users telling you the best size.
Here is a nice response on how to best scale a large image (with a link to code).