What should I do if a BC3_UNORM texture's resolution is not a multiple of 4? - directx

I have a texture whose resolution is 95x90 and whose format is DXGI_FORMAT_BC3_UNORM.
When I try to load this texture, I get the error below.
D3D11 ERROR: ID3D11Device::CreateTexture2D: A Texture2D created with the following Format (0x4d, BC3_UNORM) experiences alignment restrictions on the dimensions of the Resource. The dimensions, which are (Width: 95, Height: 90), must be multiples of (Width: 4, Height: 4). [ STATE_CREATION ERROR #101: CREATETEXTURE2D_INVALIDDIMENSIONS]
It doesn't make sense for me to open a paint app and resize the original texture by hand.
Is there a nice way to fix this error?
Update:
If I load the texture in Visual Studio, its dimensions are reported as multiples of 4.
The result is different when I load the texture with DirectXTex:
auto hr = DirectX::CreateDDSTextureFromFile(Graphic::Device(), path.data(), &_texture, &_srv);
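For completeness, a minimal sketch of checking that call's result (assuming DirectXTK's DDSTextureLoader; Graphic::Device(), _texture, and _srv are the asker's own members, shown here as parameters):

#include <d3d11.h>
#include <DDSTextureLoader.h> // DirectXTK
#include <cstdio>

// Wraps the loader call from the question so a failing HRESULT is surfaced;
// with the debug layer enabled, the CREATETEXTURE2D_INVALIDDIMENSIONS message
// above is emitted at this call.
HRESULT LoadDDS(ID3D11Device* device, const wchar_t* path,
                ID3D11Resource** texture, ID3D11ShaderResourceView** srv)
{
    HRESULT hr = DirectX::CreateDDSTextureFromFile(device, path, texture, srv);
    if (FAILED(hr))
        fwprintf(stderr, L"CreateDDSTextureFromFile(%ls) failed: 0x%08X\n",
                 path, static_cast<unsigned>(hr));
    return hr;
}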

The formal rule in DirectX is that BC-compressed images must have top-level widths and heights that are multiples of 4, but mipmaps can obviously end up with non-multiple-of-4 values. There are also special rules for dealing with the 1x1, 1x2, 2x1, and 2x2 pixel cases.
DirectXTex can block-compress arbitrarily sized images in order to support mipmaps.
I have a -fixbc4x4 switch in my texconv tool specifically for this case. It 'resizes' the top-most mip level without a decompression/compression cycle, but it loses any existing mip levels, so if you want to regenerate them you'll end up decompressing and recompressing for that.
So, in short, this will create a version of the DDS texture with just the top-most level, rounded up to a 4x4 multiple, without any modification of the blocks:
texconv -fixbc4x4 -m 1 inputdds.dds
This will fix the top-most level without any modification of the blocks, then decompress the image, generate mips, and recompress:
texconv -fixbc4x4 -m 0 inputdds.dds
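To make the block math behind the rule concrete, here is a small sketch (the 16-bytes-per-4x4-block figure is the BC3 layout from the DXGI documentation; the 95x90 size is from the question):

#include <cstdint>
#include <cstdio>

// BC3 stores 4x4 texel blocks at 16 bytes per block; dimensions are rounded
// up to whole blocks. The multiple-of-4 rule applies only to mip 0, which is
// why texconv -fixbc4x4 can round 95x90 up to 96x92 without touching any
// block data.
constexpr uint32_t kBC3BytesPerBlock = 16;
uint32_t BlocksFor(uint32_t texels) { return (texels + 3) / 4; }

int main()
{
    uint32_t w = 96, h = 92; // the fixed-up top level
    for (uint32_t mip = 0; ; ++mip)
    {
        printf("mip %u: %ux%u texels -> %ux%u blocks (%u bytes)\n",
               mip, w, h, BlocksFor(w), BlocksFor(h),
               BlocksFor(w) * BlocksFor(h) * kBC3BytesPerBlock);
        if (w == 1 && h == 1) break;
        w = (w > 1) ? w / 2 : 1; // lower mips (48x46, 24x23, ...) need not be
        h = (h > 1) ? h / 2 : 1; // multiples of 4; they just round up to blocks
    }
}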

Related

ImageJ TIFF import image dimensions?

I have a bunch of slides auto-scanned with a slide scanner (Hamamatsu), which I can export from the NDPview software at different magnifications. So far, I have been zooming in to where I get the best resolution of my region of interest and adding a scale bar for 1 mm (as 1000 um) using the native scale bar option in the NDPview software. I then export the "view" from NDPview to TIFF. This TIFF is then imported into ImageJ (Fiji), where I set the scale using the scale bar I drew. This has been working well, but with over 500 images to do, it's a bit of a pain.
Since the TIFF imports into ImageJ with inch-by-inch dimensions, I figured I could go to Image -> Properties and just change the unit of length to um. To test this, I selected an area to measure, then compared it to my old method... and the values are completely different. Any idea why? 1 is the old method, 2 is the new method.
I made certain to "remove scale" in the scale bar window between each test. The whole image dimensions are different too:
If the images are all the same magnification and resolution, then as long as you know a measured distance (in pixels) and the physical distance (in microns or mm), you can set it using Analyze > Set Scale...
To do this in a macro you can use
run("Set Scale...", "distance=255 known=1 pixel=1 unit=micron");
where 255 is the distance in pixels for your known unit (1 micron). This can be applied to all TIFFs in a folder if you wrap that line in a loop over all the TIFFs and save each resulting image, as in the sketch below.
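A minimal sketch of such a batch macro (the folder prompts are generic, and the distance/unit values are placeholders you would replace with your own calibration):

// Hypothetical batch macro: apply one known scale to every TIFF in a folder.
input = getDirectory("Choose input folder");
output = getDirectory("Choose output folder");
files = getFileList(input);
for (i = 0; i < files.length; i++) {
    if (endsWith(files[i], ".tif") || endsWith(files[i], ".tiff")) {
        open(input + files[i]);
        run("Set Scale...", "distance=255 known=1 pixel=1 unit=micron");
        saveAs("Tiff", output + files[i]);
        close();
    }
}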

MPSImageIntegral returns all zeroes when images are smaller

I have a Metal shader that processes an iPad Pro video frame to generate a (non-displayed) RGBA32Float image in a color attachment. That texture is then put through an MPSImageIntegral filter, encoded into the same command buffer as the shader, which results in an output image of the same size and format. In the command buffer’s completion handler, I read out the last pixel in the filtered image (containing the sum of all pixels in the input image) using this code:
let src = malloc(16) // 4 Floats per pixel * 4 bytes/Float
let region = MTLRegionMake2D(imageWidth - 1, imageHeight - 1, 1, 1) // last pixel in image
outputImage!.getBytes(src!, bytesPerRow: imageWidth * 16, from: region, mipmapLevel: 0)
let sum = src!.bindMemory(to: Float.self, capacity: 4)
NSLog("sum = \(sum[0]), \(sum[1]), \(sum[2]), \(sum[3])")
That works correctly as long as the textures holding the input and filtered images are both the same size as the iPad's display, 2048 x 2732, though it's slow with such large images.
To speed it up, I had the shader generate just a ¼ size (512 x 683) RGBA32Float image instead, and used that same size and format for the filter's output. But in that case, the sum I read out is always all zeroes.
By capturing GPU frames in the debugger, I can see that the dependency graphs look the same in both cases (apart from the reduced texture sizes in the latter), and that the shader and filter work as expected in both cases, based on how the input and filtered textures appear in the debugger. So why can I no longer read out the filtered data, when the only change was reducing the size of the filter's input and output images?
Some things I’ve already tried, to no avail:
Using 512 x 512 (and other size) images, to avoid possible padding artifacts in the 512 x 683 images.
Looking at other pixels, near the middle of the output image, which also contain non-zero data according to the GPU snapshots, but which read as 0 when using the smaller images.
Using an MTLBlitCommandEncoder in the same command buffer to copy the output pixel to an MTLBuffer, instead of, or in addition to, using getBytes. (That was suggested by the answer to this macOS question, which is not directly applicable to iOS.)
I've found that if I change the render pass descriptor's storeAction for the shader's color attachment, the one that receives the initial RGBA32Float input image, from .dontCare to .store, then the code works for 512 x 683 images as well as for 2048 x 2732 ones.
Why it worked without that for the larger images I still don't know.
I also don't know why this store action matters, as the filtered output image was already being successfully generated, even when its input was not stored.
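As a sketch of where that change lives (assuming a hand-built render pass descriptor; inputImage stands in for the app's RGBA32Float color attachment):

import Metal

// Sketch of the storeAction fix described above; `inputImage` is a stand-in
// for the RGBA32Float texture the shader renders into.
func makeRenderPassDescriptor(inputImage: MTLTexture) -> MTLRenderPassDescriptor {
    let desc = MTLRenderPassDescriptor()
    desc.colorAttachments[0].texture = inputImage
    desc.colorAttachments[0].loadAction = .clear
    // Previously .dontCare; .store writes the attachment contents back to
    // memory after the pass, which also fixed the getBytes readback at 512 x 683.
    desc.colorAttachments[0].storeAction = .store
    return desc
}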

Glitching GPUImageAmatorkaFilter with images that are certain dimensions

Has anyone seen issues with image sizes when using GPUImage's GPUImageAmatorkaFilter?
It seems to be related to multiples of 4: when the width and height aren't multiples of 4, the output glitches.
For example, if I try and filter an image with width and height 749, it glitches.
If I scale it to 752 or 744, it works.
The weird thing is, it also glitches at 748, which is a multiple of 4, but an odd multiple (187).
The initial workaround is to do some calculations to make the image smaller, but it's a rubbish solution; I'd obviously much prefer to be able to filter any size.
Before
After
GPUImageAmatorkaFilter uses GPUImageLookupFilter with lookup_amatorka.png as the lookup texture. This texture is organised as an 8x8 grid of 64x64-pixel quads representing all possible RGB colors. I tested GPUImageAmatorkaFilter with a 749x749 px image and it works (first check that your code is up to date). I believe you are using a lookup texture of the wrong size; it should be 512x512 px.

What's the best way to use big textures (2048*1536) in Unity3d with NGUI on iOS?

I'm using Unity3d (4.3.1) and NGUI to create a 2D iOS (iPad) app. I also need to use a lot of full-screen images (about 100 images at 2048x1536), for a gallery, for example.
Right now I import them with the GUI texture type, an iPhone override with max size 2048, and compression quality: normal, and I display them with a UITexture using the Unlit/Transparent shader.
However, after about 40 images the app is terminated due to a memory error in Xcode. So the question is: what kind of images do I need, and with what import settings, to make them work?
I'm using an iPad 3 as a test device with Xcode 5.1.1. I'll be thankful for any help!
I also need to use a lot of full-screen images (about 100 images at 2048x1536), for a gallery, for example.
I think your 2048x2048 images use a huge amount of memory: a 2048x2048 true-color image uses 16 MB, so this case needs about 1600 MB of memory! A normal application shouldn't use much more than about 200 MB.
So I think you need to reduce your memory usage:
Remember that this texture is going to be expanded to 2048x2048 by Unity ( http://www.opengl.org/wiki/NPOT_Texture ), so if you reduce the file to 1500x1000, your application will still use a 2048x2048 image. But if you can reduce it to 1024x1024, do it: a 1024 image uses just 4 MB of memory.
If you can use texture compression, use it. PVRTC 4-bit compression ( https://docs.unity3d.com/Documentation/Manual/ReducingFilesize.html ) makes the file 1/8 the size of true color, and memory use also goes down (maybe to half).
If your application doesn't display all the images at once, load them dynamically and use thumbnails.
Good luck:D
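For reference, the arithmetic from the answer above as a quick check (plain C++, not Unity API; assumes 4 bytes per texel for true color):

#include <cstdio>

// Memory math from the answer above: true-color RGBA is 4 bytes per texel,
// and Unity pads NPOT textures up to the next power of two (2048x2048 here).
int main()
{
    const double mb = 2048.0 * 2048.0 * 4.0 / (1024.0 * 1024.0); // one texture
    printf("one 2048x2048 RGBA texture: %.0f MB\n", mb);             // 16 MB
    printf("100 such textures: %.0f MB\n", 100.0 * mb);              // 1600 MB
    printf("with PVRTC 4-bit (1/8 size): %.0f MB each\n", mb / 8.0); // 2 MB
}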
If you want to make a gallery-like app to render photos, maybe you can try a different approach:
Create two large editable textures and fill their texels with image data (they must be editable; otherwise you will not be able to write image data directly into them).
If you still have memory issues, or if you want to use less memory, you can use several smaller textures as tiles and render parts of the image into each one. Remember to configure the texture borders correctly, or avoid using border texels, to prevent wrapping problems.
The best way is to use a smaller texture. On an iPad you would need a magnifying glass to really appreciate the difference between 1024x1024 and larger textures. Remember that an iPad screen is smaller (7"~10") than a computer's, and with filtering enabled it is really hard to tell the difference.
If you still need to manage such a large texture for some other reason (zooming or similar), I recommend one of the following approaches:
Split the texture into layers with an alpha channel (transparency): backgrounds can usually be rendered at lower resolutions.
Also split the texture into blocks: most textures have repeating patterns.
Use compression.
Always avoid using such large textures if possible.

3D Model Problems

When I add a model to my content and run the program, I get the following error:
Invalid texture. Face 0 is sized 522x360, but textures using DXT compressed formats must be multiples of four.
Can anyone help me?
Thanks in advance.
The answer is exactly what it says: the dimensions of your texture image aren't multiples of four (and should ideally be powers of two). Just resize your texture images.
Set the width and height both to 512 for best results. (Use an image editor like GIMP instead of MS Paint to get a clean scale that doesn't look weird.)
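If you'd rather compute target sizes than eyeball them, the rounding is simple (a generic sketch, not tied to XNA's content pipeline):

#include <cstdint>
#include <cstdio>

// Round a dimension up to the next multiple of 4 (the DXT block requirement)
// or to the next power of two (the stronger recommendation above).
uint32_t NextMultipleOf4(uint32_t v) { return (v + 3) & ~3u; }
uint32_t NextPowerOfTwo(uint32_t v)
{
    uint32_t p = 1;
    while (p < v) p <<= 1;
    return p;
}

int main()
{
    // The 522x360 face from the error message:
    printf("522 -> %u (multiple of 4), %u (power of two)\n",
           NextMultipleOf4(522), NextPowerOfTwo(522)); // 524, 1024
    printf("360 -> %u (multiple of 4), %u (power of two)\n",
           NextMultipleOf4(360), NextPowerOfTwo(360)); // 360, 512
}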
