Tilemap gets messed up when converted to HD - iOS

I have a floorplan that I need to turn into a tilemap. I'm using the program HD2x to convert my tilemap into an -hd tilemap. I tried it a couple of different ways:
1) I converted the floorplan into an -hd .png with HD2x, put this into Tiled, saved it, and converted the final .tmx file into -hd. I then put the -hd .tmx and -hd .png files into Xcode.
2) I built the .tmx from the regular floorplan, then converted the .tmx into -hd and converted floorplan.png into -hd, and put these into Xcode.
Neither of these is working: either the tilemap is half the size it should be, or it's a QUARTER of the size it should be and the floorplan looks messed up.
Please help.

Original Comment
You might be using the program wrong. It doesn't make sense that a tool would take an SD image and make it HD. Most likely it is meant to take an HD image and cut its resolution in half for the SD version.
Answer
It seems like the tool is creating images that are half the size of the original, but you are expecting it to do the opposite. In general you wouldn't want to go from SD to HD by simply increasing the image's resolution, because the quality would drop; taking an image and simply doubling its size will not look good.
But quality aside, it wouldn't make sense for someone to create an application that increases your resolution simply by doubling the image's size. If the application you are using does have that as an option, you are likely not setting it correctly. From the sounds of it, the application is creating images half the size of the ones you feed it: halving once leaves the map at half the expected size, and running the conversion twice (as in your first method) leaves it at a quarter, which matches what you are seeing.

Related

Gimp exports: Why is the image size increased?

I'm trying to use a two step process of employing Gimp to delete sections of images and then using Inkscape for the remainder of the image work.
Unfortunately, I'm seeing a resolution change when doing the export to PNG from Gimp.
The exported image is around 50% larger than the original, which impacts the quality.
Is there a way to keep the resolution constant when exporting the file?
Hopefully I'm just forgetting something, since I've spent some time away from image work.
Please let me know if any additional info is required.
In the interim, I'll try another tool to do the Gimp step.
THANKS!
Edit: Updated size to resolution.
For a bitmap/raster image, the resolution (for Gimp: "Image print resolution", see Image>Print size) is only indicative. The only thing that counts is the size in pixels.
If you have the image window set to "Dot for Dot" (Edit>Preferences>Image Windows>General>"Use dot for dot", or View>Dot for dot), the image is displayed at the definition of your screen (around 100 PPI for regular screens, 200 PPI for high-definition ones such as Retina).
When you create the image (File>New...), you can specify a print definition and a print size, and Gimp will compute the required size in pixels.
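For example, the arithmetic Gimp does for you there looks like this (a quick sketch in Swift with made-up numbers, a 4x6 inch print at 300 PPI):

    // Pixel size = print size (inches) x print resolution (PPI).
    let printWidthInches = 4.0
    let printHeightInches = 6.0
    let printResolutionPPI = 300.0

    let pixelWidth = Int(printWidthInches * printResolutionPPI)   // 1200 px
    let pixelHeight = Int(printHeightInches * printResolutionPPI) // 1800 px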

What is the correct procedure to create images/sprite for iPhone/iPad apps/games?

I am new to developing games with Xcode for iPhone/iPad, so I need some help with the correct procedure to create images/sprites for a game.
So far I have created my sprites with Illustrator and exported them as PDF files. In Xcode I created a single-scale asset and put the PDF in it.
If I understand the documentation correctly, Xcode automatically generates image files at #1x, #2x and #3x from the PDF. Does it generate PNG files?
Then I create an SKSpriteNode and set the size like this: abc.size = CGSize(width: 123, height: 123). Instead of 123, I fill in the width and height corresponding to the frame/image size I set up in Illustrator. Is this correct? I think so, because this is the #1x version?!
But if I need the same image for iPhone and iPad in different sizes, I can't simply resize it, because the #1x image version isn't a vector anymore and is bound to the frame size I chose in Illustrator? What do I do then? Do I have to resize my image in Illustrator and export it at a different size?
What is the correct procedure? Do I have to draw a sketch with a pencil on paper at the very beginning and then measure it with a ruler? Would I then go to Illustrator and set the frame width/height to what I measured manually?
So many questions. I am very confused by these image sizes, resolutions and #1x, #2x and #3x versions. I am not sure why I should use vector files if I still can't resize the images during development as I would like, because they are still bound to the frame size I chose in Illustrator.
Is there no way to set ratios between all my images and then just use the vector PDF file? How should I set up my Illustrator files?
I hope somebody can bring some light into the dark. Thank you.
Your PDF should be sized in points at #1x (not pixels). Points are the same physical size on the phone and the iPad, but if you want the images smaller on the phone you need a second set of images; the asset catalog lets you swap out images based on iPhone/iPad. Xcode renders your PDF to PNGs at #1x, #2x and #3x, and your app will pick the correct PNG based on the resolution of the device. You are correct that these are no longer vector assets and that scaling them up could leave you with blurry/pixelated images. You have a couple of choices:
1) Include a scaled-up version of your image at its maximum scale in the app and use this version only when you need to scale up (otherwise it's a waste of memory and processing if you are always rendering a much smaller image). This is probably the easiest solution; see the sketch after this list.
2) Leave your assets as vectors and load them as vectors. You can still render them to images at a constant scale or range of scales for performance, but you can always re-render them at any scale if needed. Most likely you want to use an SVG library for this.
3) Directly import your assets as code using a program such as PaintCode. There used to be similar plugins for Illustrator, but I haven't seen one for Swift 3/Illustrator CC. This is obviously faster than #2 since there is no need to decode the vector file. If your file has a lot of overdraw you may still want to rasterize to images for performance.
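Here is a rough sketch of choice 1 (the asset names "logo" and "logo-max" and the 4x export factor are made up for illustration):

    import SpriteKit

    // Choice 1: keep a pre-rendered, maximum-scale bitmap around and only
    // use it when the node actually has to grow past 1x.
    func logoNode(displayScale: CGFloat) -> SKSpriteNode {
        if displayScale > 1.0 {
            // "logo-max" is assumed to have been exported at 4x the base size,
            // so scaling it by displayScale / 4 gives the requested on-screen size.
            let node = SKSpriteNode(imageNamed: "logo-max")
            node.setScale(displayScale / 4.0)
            return node
        } else {
            let node = SKSpriteNode(imageNamed: "logo")
            node.setScale(displayScale)
            return node
        }
    }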
Here's what I've found from my experience:
1) Xcode does not generate #2x and #3x from .png files. It can't really - you need to manually supply #1x, #2x, and #3x sizes.
2) Whatever size you use for the CGSize(...), that should be your #1x image; then generate #2x and #3x from that. I started by designing the size of a level in the scene editor, then made a generic SKSpriteNode shape just to get the size I wanted, and then made the image at the size I found looked good (see the sketch after this list).
3) Xcode supports vector-based graphics (SVG, PDF), but you can't use them as part of a texture atlas, which makes them much less useful in my opinion.
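For what it's worth, point 2 above in code might look like this (the asset name "player" and the 123-point size are just placeholders):

    import SpriteKit

    // The size assigned here is in points, i.e. the #1x dimensions.
    // SpriteKit picks the #2x or #3x bitmap automatically on Retina devices,
    // as long as those variants exist in the asset catalog.
    let player = SKSpriteNode(imageNamed: "player")
    player.size = CGSize(width: 123, height: 123)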

Supplying the right image size when not knowing what the size will be at runtime

I am displaying a grid of images (3 rows x 3 columns) in a collection view. Each image is a square and its width is determined to be 1/3 of the collection view's width. The collection view is pinned to the left and right margins of the main view.
I do not know what the image height and width will be at runtime, because of different screen sizes of various iPhones. For example each image will be 100x100 display pixels on 5S, but 130x130 on 6+. I was advised to supply images that exactly match the size on screen. Bigger images often become pixelated and over-sharp when downsized. How does one tackle such a problem?
The usual solution is to supply three versions, for single-, double-, and triple-resolution screens, and downsize in real time by redrawing with drawInRect into a graphics context when the image is first needed.
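A minimal sketch of that redraw step (draw(in:) is the Swift spelling of drawInRect; pass in whatever one third of the collection view's width works out to):

    import UIKit

    // Redraw the large source image into a context that matches the on-screen
    // size, so the cell displays a bitmap at exactly the size it needs.
    func downscaled(_ image: UIImage, to size: CGSize) -> UIImage? {
        // A scale of 0 means "use the device's screen scale", so the bitmap
        // matches Retina density.
        UIGraphicsBeginImageContextWithOptions(size, false, 0)
        defer { UIGraphicsEndImageContext() }
        image.draw(in: CGRect(origin: .zero, size: size))
        return UIGraphicsGetImageFromCurrentImageContext()
    }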
I do not know what the image height and width will be at runtime, because of different screen sizes of various iPhones. For example each image will be 100x100 display pixels on 5S, but 130x130 on 6+
Okay, so your first sentence is a lie. The second sentence proves that you do know what the size is to be on the different screen sizes. Clearly, if I tell you the name of a device, you can tell me what you think the image size should be. So, if you don't want to downscale a larger image at runtime because you don't like the resulting quality, simply supply actual images at the correct size and resolution for every device, and use the correct image on the actual device type you find yourself running on.
If your images are photos or raster type images created using a raster drawing tool, then somewhere you will have to scale the original to the sizes you want. You can either do this while running in iOS, or create sets up front using a tool which can give you better scaling results. Unfortunately, the only perfect image will be the original with everything else being a distortion of the truth.
For icons, the only accurate rendering solution is to use vector graphics. Tools like Adobe Illustrator will let you create images which you can scale to different sizes without losing clarity. Unfortunately this still leaves you generating images up front. You can script this generation with most tools, and given you said your images are all square, the total number needed is not huge. At most you need 3 for iPhone (the 4/5 are the same width, plus the 6 and 6+) and 2 for iPad (one for the mini/iPad 1 and one for Retina).
Although iOS has no direct support I know of for vector image rendering, there are some 3rd party tools. http://www.paintcodeapp.com/ is an example which seems to let you import vector images or draw vector images and then generate image code to run in your app. This kind of tool would give you what you want as the images are now vector drawings drawn at the scale you choose at run time. $99 though.
There is also SVGKit (https://github.com/SVGKit/SVGKit), but I'm not sure how good or bad it is. It seems to let you simply load and render directly from SVG files. Might be worth trying.
So in summary, I think you either generate the relatively small set of images up front using a tool whose output you can control, take the hit in iOS and let it scale the images, or use a 3rd-party vector-to-image rendering kit, which would give you what you want.

iOS app crashes because images use too much RAM

I know this is a stupid problem, but this is my first real app, I have no one to ask, and I looked this problem up and found no similar ones.
My app crashes on real devices with no exception. I saw in the simulator that it uses too much RAM, and after a while I came to the conclusion that the pictures I am using are to blame.
The app is structured this way: it has 8 view controllers for different things. For example, it starts with one that lets the user select the avatar with which he/she will play, and here I have two pictures; next is a view controller which shows the stats for that avatar, and here there is another picture, and so on. The problem is that each picture uses 40 MB of RAM to be displayed, and things add up, so the app uses more than 300 MB of RAM by the time the user gets to the game view controller where the game is. Because of this it crashes on devices like the iPad 2 or iPhone 4, but not on the iPhone 5.
I tried to set the images both from "images.xcassets" and from a ".atlas" folder, but the result is exactly the same. The pictures are no larger than 1500x1999 px and are in PNG format.
Also, I saw that if the app were to start directly in the gameViewController it would use 180 MB, so the other view controllers remain in memory or something like that. Should I "clear" them or something similar?
//-------update-------
This is what I got from Instruments:
Memory is a big deal on mobile devices. There is no single clear answer to your question, but I can give you some advice:
If your images are plain colors or have symmetric axes, use resizable images. You can use just one line of pixels stretched across the width or height to cover the entire screen with a tiny amount of memory.
Image compression has no effect once the image is decompressed. So if you have a 600 kB PNG and think that converting it to a 300 kB one will lower memory usage, that is only true for disk space; when an image is decompressed in memory, its size is width x height x number of channels x bits per channel.
Resize images: if you are loading a 2000 px square image into memory and showing it inside an 800 px square image view, resize it before adding it. You will have a memory peak while resizing, but afterwards it will use less memory.
If you need to use big images, use tiling techniques such as CATiledLayer.
If you don't need an image anymore, get rid of it. It's OK to have an array of paths to images, but not an array of full uncompressed images.
Avoid -imageNamed: it caches images, and even though Apple says this cache is released under memory pressure, you don't have much control over it and it could be too late to avoid a crash (see the sketch after this list).
This is general advice; it's up to you whether it fits your requirements.
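As a small illustration of the last point (the file name "avatar" is made up), UIImage(contentsOfFile:) bypasses the imageNamed cache, so the decoded bitmap is released as soon as nothing retains it:

    import UIKit

    // Loading through contentsOfFile skips the imageNamed cache; combined with
    // keeping only paths (not decoded images) in your arrays, the memory is
    // freed as soon as the UIImage is no longer referenced.
    func loadUncachedImage(named name: String) -> UIImage? {
        guard let path = Bundle.main.path(forResource: name, ofType: "png") else {
            return nil
        }
        return UIImage(contentsOfFile: path)
    }

    // Decode on demand, let ARC free it when the view controller goes away.
    let avatar = loadUncachedImage(named: "avatar")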
You should definitely follow Andrea's advice.
Additionally, you should consider setting the image size to exactly what you need. You say you've tried setting the images from xcassets, so you have full control over the images you're loading, which is great (compared to downloading an image that you cannot modify).
I highly suggest you read some documentation on using asset catalog files. This will allow you to have high-resolution images for bigger screens that also have more memory, and smaller ones for older devices, which is what you want here.
Also, note that 1500x1999px is still a very big size for most mobile devices.
More links about screen-size:
https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/MobileHIG/IconMatrix.html
http://www.paintcodeapp.com/news/iphone-6-screens-demystified

What's the best way to use big textures (2048*1536) in Unity3d with NGUI on iOS?

I'm using Unity3d (4.3.1) and NGUI to create a 2D iOS (iPad) app. I also need to use a lot of full-screen images (about 100 images of size 2048x1536), for a gallery for example.
Right now I'm importing them with the GUI texture type, an override for iPhone with max size 2048, and compression quality: normal. And I'm using a UITexture with the Unlit/Transparent shader to show them.
However, after about 40 images in the project, Xcode reports "terminated due to memory error". So the question is, what type of images do I need, and with which settings, to make them work?
I'm using an iPad 3 as a test device with Xcode 5.1.1. I'll be thankful for any help!
Also I need to use a lot of full screen images (about 100 images with size 2048x1536), for Gallery for example.
I think your 2048x2048 images use a huge amount of memory. A 2048x2048 true-color image uses about 16 MB of memory, so this case would need about 1600 MB! A normal application shouldn't go over about 200 MB.
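The arithmetic behind those numbers, as a quick sketch (the math is the same in any engine):

    // An uncompressed true-color (RGBA, 4 bytes per pixel) texture costs
    // width * height * 4 bytes once loaded, regardless of the PNG's file size.
    let bytesPerTexture = 2048 * 2048 * 4                  // 16_777_216 bytes, about 16 MB
    let galleryMB = 100 * bytesPerTexture / (1024 * 1024)  // about 1600 MB for 100 images
    let smallTextureMB = 1024 * 1024 * 4 / (1024 * 1024)   // 4 MB if you can drop to 1024x1024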
So I think you need to reduce your memory use:
Remember that the texture is going to be expanded to 2048x2048 by Unity (http://www.opengl.org/wiki/NPOT_Texture). So if you reduce the file to 1500x1000, your application will still use a 2048x2048 texture. But if you can reduce it to 1024x1024, do it; a 1024x1024 image uses just 4 MB of memory.
If you can use texture compression, use it. PVRTC 4-bit compression (https://docs.unity3d.com/Documentation/Manual/ReducingFilesize.html) makes the file size 1/8 of true color, and memory use also goes down (maybe to half).
If your application doesn't display all the images at once, load images dynamically and use thumbnails.
Good luck:D
If you want to make a gallery-like app to render photos maybe you can try a different approach:
Create two large editable textures and fill their texels with image data (they must be editable, otherwise you will not have access to write image data into them directly).
If you still have memory issues, or if you want to use less memory, you can use several smaller textures as tiles and render parts of the image to each smaller texture. Remember to configure the texture borders correctly, or don't use the border texels, to avoid wrapping problems.
The best way is to use a smaller texture. On an iPad you will need a magnifying glass to really appreciate the difference between 1024x1024 and larger textures. Remember an iPad screen is smaller (7"-10") than a computer's, and with filtering enabled it is really hard to tell the difference.
If you still need to manage such a large texture for some other reason (zooming or similar), I recommend one of the following approaches:
Split the texture into layers with an alpha channel (transparency): backgrounds can usually be rendered at lower resolutions.
Also split the texture into blocks: most textures have repeating patterns.
Use compression.
Always avoid using such large textures if possible.
