3D Model Problems - xna

When I add a model to my content and run the program I get the following error:
Invalid texture. Face 0 is sized 522x360, but textures using DXT compressed formats must be multiples of four.
Can anyone help me?
Thanks in advance

The answer is exactly what it says: the dimensions of your texture image aren't multiples of four (ideally they should be powers of two), so just resize your texture images.
Set both the width and height to 512 for best results. (Use an image editor like GIMP rather than MS Paint so the rescale comes out clean.)
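For example, here is a minimal sketch of that resize step, assuming Python with the Pillow library and a made-up texture file name (the XNA content pipeline doesn't care which tool produced the 512x512 file):

    # pip install Pillow
    from PIL import Image

    # Hypothetical paths; substitute your actual texture file.
    src = Image.open("model_texture.png")        # e.g. 522x360
    dst = src.resize((512, 512), Image.LANCZOS)  # power-of-two size, safe for DXT
    dst.save("model_texture_512.png")
    print(dst.size)                              # (512, 512)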

Related

supplying the right image size when not knowing what the size will be at runtime

I am displaying a grid of images (3 rows x 3 columns) in a collection view. Each image is a square and its width is set to 1/3 of the collection view's width. The collection view is pinned to the left and right margins of the main view.
I do not know what the image height and width will be at runtime, because of the different screen sizes of the various iPhones. For example each image will be 100x100 display pixels on a 5S, but 130x130 on a 6+. I was advised to supply images that exactly match the size on screen, because bigger images often become pixelated or over-sharp when downsized. How does one tackle such a problem?
The usual solution is to supply three versions, for single-, double-, and triple-resolution screens, and downsize in real time by redrawing with drawInRect into a graphics context when the image is first needed.
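The drawInRect call is iOS-specific, but the underlying pattern is simply "rescale to the exact display size the first time the image is needed, then cache the result". A rough sketch of that pattern in Python with Pillow, purely for illustration (file names and cell sizes are made up):

    from PIL import Image

    _cache = {}  # (path, size) -> image already scaled for display

    def image_for_cell(path, cell_size):
        """Return the image scaled exactly to the on-screen cell size, caching the result."""
        key = (path, cell_size)
        if key not in _cache:
            _cache[key] = Image.open(path).resize(cell_size, Image.LANCZOS)
        return _cache[key]

    # e.g. a 3x3 grid whose cells come out as 130x130 on this particular screen
    thumb = image_for_cell("photo_01.jpg", (130, 130))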
I do not know what the image height and width will be at runtime, because of different screen sizes of various iPhones. For example each image will be 100x100 display pixels on 5S, but 130x130 on 6+
Okay, so your first sentence is a lie. The second sentence proves that you do know what the size is to be on the different screen sizes. Clearly, if I tell you the name of a device, you can tell me what you think the image size should be. So, if you don't want to downscale a larger image at runtime because you don't like the resulting quality, simply supply actual images at the correct size and resolution for every device, and use the correct image on the actual device type you find yourself running on.
If your images are photos or raster type images created using a raster drawing tool, then somewhere you will have to scale the original to the sizes you want. You can either do this while running in iOS, or create sets up front using a tool which can give you better scaling results. Unfortunately, the only perfect image will be the original with everything else being a distortion of the truth.
For icons, the only accurate rendering solution is to use vector graphics. Tools like Adobe Illustrator will let you create images which you can scale to different sizes without losing clarity. Unfortunately this still leaves you generating images up front. You can script this generation with most tools, and since you said your images are all square, the total number needed is not huge: at most 3 for iPhone (the 4 and 5 share a width, plus the 6 and 6+) and 2 for iPad (one for the mini/iPad 1 and one for the retina models).
Although iOS has no direct support I know of for vector image rendering, there are some 3rd party tools. http://www.paintcodeapp.com/ is an example which seems to let you import vector images or draw vector images and then generate image code to run in your app. This kind of tool would give you what you want as the images are now vector drawings drawn at the scale you choose at run time. $99 though.
There is also the SVGKit (https://github.com/SVGKit/SVGKit), but not sure how good/bad this is. It seems to let you simply load and render direct from SVG files. Might be worth trying.
So in summary, I think you either generate the relatively small set of images up front using a tool whose output you can control, take the hit in iOS and let it scale the images, or use a 3rd-party vector rendering kit which would give you what you want.

Glitching GPUImageAmatorkaFilter with images that are certain dimensions

Has anyone seen issues with image sizes when using GPUImage's GPUImageAmatorkaFilter?
It seems to be related to multiples of 4 - when the width and height aren't multiples of 4, it glitches the output.
For example, if I try and filter an image with width and height 749, it glitches.
If I scale it to 752 or 744, it works.
The weird thing is, it also glitches at 748, which is a multiple of 4, but an odd multiple (187).
The initial workaround is to do some calculations to make the image smaller, but it's a rubbish solution; I'd obviously much prefer to be able to filter any size.
[Before / after screenshots omitted]
GPUImageAmatorkaFilter uses GPUImageLookupFilter with lookup_amatorka.png as its lookup texture. That texture is organised as an 8x8 grid of 64x64-pixel quads representing all possible RGB colors. I tested GPUImageAmatorkaFilter with a 749x749 px image and it works (first check that your code is up to date). I believe you are using a lookup texture of the wrong size; it should be 512x512 px.
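To make that layout concrete, here is a simplified sketch in Python (NumPy + Pillow) of how a 512x512 lookup texture of this kind is sampled: the blue channel picks one of the 64 quads in the 8x8 grid, and red/green index into that 64x64 quad. This is only a nearest-neighbour approximation of what the GPUImageLookupFilter shader does (the real shader also interpolates between neighbouring blue quads), and the row order may be flipped depending on how the PNG is loaded:

    import numpy as np
    from PIL import Image

    def apply_lookup(image_path, lut_path):
        img = np.asarray(Image.open(image_path).convert("RGB")).astype(int)
        lut = np.asarray(Image.open(lut_path).convert("RGB"))
        assert lut.shape[:2] == (512, 512), "lookup texture must be 512x512"

        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        tile = b * 63 // 255           # which of the 64 quads (0..63), chosen by blue
        ty, tx = tile // 8, tile % 8   # quad position in the 8x8 grid
        x = tx * 64 + r * 63 // 255    # column inside the lookup texture
        y = ty * 64 + g * 63 // 255    # row inside the lookup texture
        return Image.fromarray(lut[y, x])

    # filtered = apply_lookup("photo_749x749.png", "lookup_amatorka.png")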

What's the best way to use big textures (2048*1536) in Unity3d with NGUI on ios?

I'm using Unity3d (4.3.1) and NGUI to create a 2D iOS (iPad) app. I also need to use a lot of full-screen images (about 100 images at 2048x1536), for a gallery for example.
Right now I import them with texture type GUI, override for iPhone with max size 2048, and compression quality set to normal, and I display them with a UITexture using the Unlit/Transparent shader.
However, after about 40 images are in the project, Xcode terminates the app with a "terminated due to memory error". So the question is: what type of images do I need, and with which settings, to make this work?
I'm using an iPad 3 as the test device with Xcode 5.1.1. I'll be thankful for any help!
Also I need to use a lot of full screen images (about 100 images with size 2048x1536), for Gallery for example.
I think your 2048x2048 textures use a huge amount of memory. A 2048x2048 true-color texture uses 16 MB, so this case needs about 1600 MB of memory, and a normal application shouldn't use more than about 200 MB (see the quick calculation sketched below).
So I think you need to reduce memory usage:
Remember that Unity is going to expand this texture to 2048x2048 (http://www.opengl.org/wiki/NPOT_Texture), so if you reduce the file to 1500x1000, your application still uses a 2048x2048 texture. But if you can reduce it to 1024x1024, do it: a 1024 texture uses just 4 MB of memory.
If you can use texture compression, use it. PVRTC 4-bit (https://docs.unity3d.com/Documentation/Manual/ReducingFilesize.html) makes the file 1/8 the size of true color, and the memory footprint is also reduced.
If your application doesn't display all the images at once, load them dynamically and use thumbnails.
Good luck:D
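As a quick back-of-the-envelope check of those numbers (a sketch assuming uncompressed 32-bit RGBA and 4 bits per pixel for PVRTC, ignoring mipmaps):

    def texture_mb(width, height, bits_per_pixel):
        """Approximate in-memory size of a texture, in megabytes."""
        return width * height * bits_per_pixel / 8 / (1024 * 1024)

    print(texture_mb(2048, 2048, 32))        # 16.0 MB per true-color 2048 texture
    print(100 * texture_mb(2048, 2048, 32))  # ~1600 MB for 100 of them
    print(texture_mb(1024, 1024, 32))        # 4.0 MB at 1024x1024
    print(texture_mb(2048, 2048, 4))         # 2.0 MB with PVRTC 4-bit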
If you want to make a gallery-like app to render photos, maybe you can try a different approach:
Create two large editable textures and fill their texels with image data (they must be editable, otherwise you will not be able to write image data directly into them).
If you still have memory issues, or if you want to use less memory, you can use several smaller textures as tiles and render parts of the image into each smaller texture. Remember to configure the texture borders correctly, or avoid using the border texels, to prevent wrapping problems.
The best way is to use a smaller texture. On an iPad you would need a magnifying glass to really appreciate the difference between 1024x1024 and larger textures. Remember that an iPad screen is smaller (7"-10") than a computer screen, and with filtering enabled it is really hard to tell the difference.
If you still need to manage such a large texture for some other reason (zooming or similar), I recommend one of the following approaches:
split the texture into layers with alpha channel (transparency): usually backgrounds can be rendered with lower resolutions.
split also the texture into blocks: usually most textures have repeating patterns.
use compression.
Always avoid using such large textures if possible.

Calculate the height and width of image from the source file

I need to calculate the image width and height from the actual image file, so I'm reading the image by opening the file. What I get is a bunch of characters and numbers that seem meaningless; they are probably the RGB information.
I just want to calculate the size of the image from the raw file contents.
I am programming in Erlang, but code in any language will help since we are working with the raw file, as long as we don't use built-in libraries.
Thank you all in advance for your help.
I found the answer by going into the details of each format. It works like this:
JPG: the width and height come after the bytes 255, 192 (the SOF0 marker 0xFF 0xC0); skip the two segment-length bytes and the precision byte, and the next four bytes are the height and width as 16-bit big-endian values (in my file the marker was followed by 0, 17, 8, hence "255,192,0,17,8").
PNG: you can find them right after "IHDR", as two 32-bit big-endian values (width, then height).
GIF: you can find them right after "GIF89a" (or "GIF87a"), as two 16-bit little-endian values (width, then height).
There is information for more formats, but these are the most common image types on the internet.
Thank you all for your time
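Here is a small sketch of that header-reading approach in Python, working directly on the raw bytes. It handles only the common cases described above (baseline JPEG with an SOF0 marker; a real parser would also handle progressive SOF2 files and other edge cases):

    import struct

    def image_size(path):
        """Return (width, height) read directly from the file header."""
        with open(path, "rb") as f:
            data = f.read(64 * 1024)  # the headers live near the start of the file

        if data[:8] == b"\x89PNG\r\n\x1a\n":
            # PNG: IHDR chunk data starts with 32-bit big-endian width, height
            i = data.index(b"IHDR") + 4
            return struct.unpack(">II", data[i:i + 8])

        if data[:6] in (b"GIF87a", b"GIF89a"):
            # GIF: 16-bit little-endian width, height right after the signature
            return struct.unpack("<HH", data[6:10])

        if data[:2] == b"\xff\xd8":
            # JPEG: walk the segments until the start-of-frame marker (SOF0, 0xFF 0xC0)
            i = 2
            while i + 9 < len(data):
                marker = data[i + 1]
                length = struct.unpack(">H", data[i + 2:i + 4])[0]
                if marker == 0xC0:  # SOF0: length, precision, then height, width
                    height, width = struct.unpack(">HH", data[i + 5:i + 9])
                    return width, height
                i += 2 + length
            raise ValueError("no SOF0 marker found in the first 64 KB")

        raise ValueError("unrecognised image format")

    # print(image_size("photo.jpg"))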
I assume when you say 'raw' you mean you only have the pixel values.
In this case there isn't always a way to know the width and height.
Say you read 400 pixels. In this case a valid image shape may be any whole-number factorization of 400, e.g. 1x400, 2x200, 4x100, 8x50, 20x20 etc., and the transposed shapes as well.
Not to mention the fact that many image formats include some padding for pixel rows that are not multiples of 4, 8 or 16...
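For instance, a quick sketch of how many candidate shapes a raw pixel count admits, using the 400-pixel example above:

    def candidate_shapes(n_pixels):
        """All width x height pairs whose product equals the pixel count."""
        return [(w, n_pixels // w) for w in range(1, n_pixels + 1) if n_pixels % w == 0]

    print(candidate_shapes(400))  # [(1, 400), (2, 200), (4, 100), ..., (400, 1)]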
How the size is encoded in the image file depends on the image type, which hopefully is also encoded in the file. You can have a look at the question "Getting Image size of JPEG from its binary" for an example with JPEG.
If your data is unknown, use Octave and load the image. Then take a look at this page:
http://www.gnu.org/software/octave/doc/interpreter/Displaying-Images.html
for commands to display images. Hopefully with some manipulation it will work. This works for raw images, though there are specific decoders. Once you understand how the image is laid out, you can write the equivalent C code.

Converting between pt and pixels

My netbook has a 10.1-inch (diagonal) monitor with a resolution of 1024x600. I think 1 pt is about 1/72 inch - is the following computation right?
Since the resolution is 1024x600, the diagonal has about 1186.83 pixels, thus 1 inch is about 1186.83/10.1 = 117.51 pixels, and thus 1 pt is about 117.51/72 = 1.63 pixels, or 1 pixel is about 0.6127 pt.
Using this relationship, I've inserted an image into a LaTeX document, converting pixels to pt and passing the result as a parameter to includegraphics, but the figure in the resulting document is rather blurred.
Is the computation correct? If not, how or where am I wrong?
How can I insert an image into a LaTeX document with precisely the same dimensions as the original?
Update: I'm using pdflatex to compile the document and the image is a PNG file. The reason I do such a stupid computation is that with no width parameter set, the image shown in the document is larger than its actual size, and I can't work out why.
LaTeX is for creating paper documents. The units it uses refer to distance on the paper output, not on the screen. So if you ask for a distance of 72pts in LaTeX, you'll get 1 inch on your printout, but the distance you get on your screen depends on the zoom-level of your pdf or ps reader, which probably doesn't know how big your screen is.
If you simply want to get the raster grid of your image to fit to the screen grid, I'd say the best thing to do is get a higher-quality graphic if you can.
(PS there's a TeX stack exchange site where you could ask your question too...)
I don't know if it will make any difference, but TeX actually uses 72.27 points to the inch (as it predates PostScript, which set the standard 72 points to the inch).
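As a worked version of the asker's conversion using TeX's 72.27 pt per inch and the 117.51 px/in figure from the question (for illustration only; as noted above, this physical size only holds on that particular screen, not on paper):

    % width of a 1024 px image on a 117.51 px/in screen, in inches and TeX points
    \[
      w = \frac{1024\ \text{px}}{117.51\ \text{px/in}} \approx 8.714\ \text{in},
      \qquad
      8.714\ \text{in} \times 72.27\ \tfrac{\text{pt}}{\text{in}} \approx 629.8\ \text{pt}.
    \]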
