Empty WebGL context uses a lot of memory

For example, on my 940M video card, a canvas created with the following code takes 500 MB of video memory:
var c = document.createElement('canvas');
var ctx = c.getContext('webgl');
c.width = c.height = 4096;
By contrast, an OpenGL context of the same size uses only 100 MB of video memory:
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_SINGLE);
int s = 4096;
glutInitWindowSize(s, s);
glutCreateWindow("Hello world :D");
Why does WebGL use so much memory? Is it possible to reduce the amount of memory used for a context of the same size?

As LJ pointed out, a canvas is double buffered, antialiased, and has an alpha channel and a depth buffer by default. You made the canvas 4096 x 4096, so that's
16 million pixels * 4 bytes per pixel (RGBA) = 64 MB for one buffer
You get that at least 4 times over:
front buffer = 1
antialiased back buffer = 2 to 16
depth buffer = 1
So that's 256 MB to 1152 MB, depending on what the browser picks for antialiasing.
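A quick sanity check of that arithmetic (a back-of-the-envelope sketch in C++; the 2x and 16x antialiasing factors come from the reasoning above, not from any API):
#include <cstdio>
int main() {
    const unsigned long long oneBuffer = 4096ULL * 4096ULL * 4;   // one RGBA8888 buffer = 64 MB
    const unsigned long long minTotal = oneBuffer * (1 + 2 + 1);  // front + 2x AA back + depth
    const unsigned long long maxTotal = oneBuffer * (1 + 16 + 1); // front + 16x AA back + depth
    std::printf("%llu MB to %llu MB\n", minTotal >> 20, maxTotal >> 20); // prints: 256 MB to 1152 MB
    return 0;
}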
To answer your question: you can try not asking for a depth buffer, an alpha channel, and/or antialiasing
var c = document.createElement('canvas');
var ctx = c.getContext('webgl', { alpha: false, depth: false, antialias: false});
c.width = c.height = 4096;
Whether the browser actually skips allocating an alpha channel, or allocates one and just ignores it, is up to the browser and driver. Whether it actually skips the depth buffer is likewise up to the browser. Passing antialias: false should at least shrink the back buffer from 2x-16x down to 1x.


Direct3D11: Flipping ID3D11Texture2D

I capture the Direct3D back buffer. When I download the pixels, the image frame is flipped along its vertical axis. Is it possible to "tell" D3D to flip the frame when copying the resource, or when creating the target ID3D11Texture2D?
Here is how I do it:
The texture into which I copy the frame buffer is created like this:
D3D11_TEXTURE2D_DESC description =
{
desc.BufferDesc.Width, desc.BufferDesc.Height, 1, 1,
DXGI_FORMAT_R8G8B8A8_UNORM,
{ 1, 0 }, // DXGI_SAMPLE_DESC
D3D11_USAGE_STAGING, // transfer from GPU to CPU
0, D3D11_CPU_ACCESS_READ, 0
};
D3D11_SUBRESOURCE_DATA data = { buffer, desc.BufferDesc.Width * PIXEL_SIZE, 0 };
device->CreateTexture2D(&description, &data, &pNewTexture);
Then on each frame I do:
pSwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), reinterpret_cast< void** >(&pSurface));
pContext->CopyResource(pNewTexture, pSurface);
D3D11_MAPPED_SUBRESOURCE resource;
pContext->Map(pNewTexture, 0, D3D11_MAP_READ , 0, &resource);
//reading from resource.pData
//...
PS: I don't have a control of the rendering pipeline. I hook an external app with this code.
Also, I don't want to mess with the pixel buffer on the CPU, such as a reverse copy in a loop. Low copy latency is a high priority.
UPDATE:
I also tried this:
D3D11_BOX box;
box.left = 0;
box.right = desc.BufferDesc.Width;
box.top = desc.BufferDesc.Height;
box.bottom = 0;
box.front = 0;
box.back = 1;
pContext->CopySubresourceRegion(pNewTexture, 0, 0, 0, 0, pSurface, 0, &box);
This leaves the frame empty of content.
Create a texture with D3D11_USAGE_DEFAULT, CPUAccessFlags = 0, and BindFlags = D3D11_BIND_SHADER_RESOURCE, and CopyResource the swap chain's back buffer into it. Create another texture with D3D11_BIND_RENDER_TARGET, set it as the render target, set a pixel shader, and draw a flipped quad using the first texture. Now you should be able to CopyResource the second texture into the staging texture you use now. This should be faster than copying flipped image data on the CPU. However, this solution takes more resources on the GPU and might be hard to set up in a hook.
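Here is a minimal sketch of that setup. It assumes device, context, width, and height are available, and that a fullscreen-triangle vertex/pixel shader pair that samples the source with v flipped is already compiled and bound; the names are hypothetical and error checking is omitted:
ID3D11Texture2D* srcCopy = nullptr; // shader-readable copy of the back buffer
ID3D11Texture2D* flipped = nullptr; // render target receiving the flipped draw
D3D11_TEXTURE2D_DESC td = {};
td.Width = width;
td.Height = height;
td.MipLevels = 1;
td.ArraySize = 1;
td.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
td.SampleDesc.Count = 1;
td.Usage = D3D11_USAGE_DEFAULT;
td.BindFlags = D3D11_BIND_SHADER_RESOURCE;
device->CreateTexture2D(&td, nullptr, &srcCopy);
td.BindFlags = D3D11_BIND_RENDER_TARGET;
device->CreateTexture2D(&td, nullptr, &flipped);
ID3D11ShaderResourceView* srcSRV = nullptr;
ID3D11RenderTargetView* flipRTV = nullptr;
device->CreateShaderResourceView(srcCopy, nullptr, &srcSRV);
device->CreateRenderTargetView(flipped, nullptr, &flipRTV);
// Then, on each frame:
context->CopyResource(srcCopy, pSurface); // back buffer -> readable copy
context->OMSetRenderTargets(1, &flipRTV, nullptr);
D3D11_VIEWPORT vp = { 0.0f, 0.0f, (FLOAT)width, (FLOAT)height, 0.0f, 1.0f };
context->RSSetViewports(1, &vp);
context->PSSetShaderResources(0, 1, &srcSRV);
context->Draw(3, 0); // fullscreen triangle; the shader flips v
context->CopyResource(pNewTexture, flipped); // flipped result -> your staging texture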
All Direct3D mapped resources should be processed scanline-by-scanline, so you can simply reverse the copy:
auto ptr = reinterpret_cast<const uint8_t*>(resource.pData)
+ (desc.BufferDesc.Height - 1) * resource.RowPitch;
for(unsigned int y = 0; y < desc.BufferDesc.Height; ++y )
{
// do something with the data in ptr
// which is desc.BufferDesc.Width * BytesPerPixel(desc.Format) bytes
// i.e. DXGI_FORMAT_R8G8B8A8_UNORM would be desc.BufferDesc.Width * 4
ptr -= resource.RowPitch;
}
For lots of examples of working with Direct3D resources, see DirectXTex.

Why do I get a negative value when sampling a 3D texture whose data are all positive?

ID3D11Texture3D* pTexture3D;
D3D11_SUBRESOURCE_DATA initialData;
initialData.pSysMem = data;
initialData.SysMemPitch = 32 * 2;
initialData.SysMemSlicePitch = 32 * 32 * 2;
hr = g_pd3dDevice->CreateTexture3D( &texDesc, &initialData, &pTexture3D );
As the code shows, I want to load data into a 3D texture; initialData is loaded from a file and all the values are positive.
But when I sample the 3D texture, I get a negative value.
The sampling code is:
g_Texture3D.SampleLevel(BilinearWrappedSampler, uvw, 0);

Why does the CGImageGetBytesPerRow() method return a weird value on some images?

I got an image from a bigger image by
let partialCGImage = CGImageCreateWithImageInRect(CGImage, frame)
but sometimes I get wrong RGBA values. For example, I calculated the average red value of an image, but it came out like a gray image.
So I checked the info as follow.
image width: 64
image height: 64
image has 5120 bytes per row
image has 8 bits per component
image color space: <CGColorSpace 0x15d68fbd0> (kCGColorSpaceICCBased; kCGColorSpaceModelRGB; sRGB IEC61966-2.1)
image is mask: false
image bitmap info: CGBitmapInfo(rawValue: 8194)
image has 32 bits per pixel
image utt type: nil
image should interpolate: true
image rendering intent: CGColorRenderingIntent
Bitmap Info: ------
Alpha info mask: True
Float components: False
Byte order mask: True
Byte order default: False
Byte order 16 little: False
Byte order 32 little: True
Byte order 16 big: True
Byte order 32 big: False
Image Info ended---------------
Now here is the really weird problem: the width and height are both 64 px, and the image has 8 bits (1 byte) per component (4 bytes per pixel), so why is the bytes-per-row 5120?
I also noticed that the bitmap info of a normal image is quite different: it doesn't have any byte order information.
I googled the difference between little endian and big endian, but I got confused when they showed up together.
I really need help, since my project is already 2 days overdue because of this. Thanks!
By the way, I used the following code to get the RGBA values.
let pixelData=CGDataProviderCopyData(CGImageGetDataProvider(self.CGImage))
let data:UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
var rs: [[Int]] = []
var gs: [[Int]] = []
var bs: [[Int]] = []
let widthMax = imageWidth
let heightMax = imageHeight
for indexX in 0...widthMax {
var tempR: [Int] = []
var tempG: [Int] = []
var tempB: [Int] = []
for indexY in 0...heightMax {
let offSet = 4 * (indexX * imageWidth + indexY)
let r = Int(data[pixelInfo + offSet])
let g = Int(data[pixelInfo + 1 + offSet])
let b = Int(data[pixelInfo + 2 + offSet])
tempR.append(r)
tempG.append(g)
tempB.append(b)
}
rs.append(tempR)
gs.append(tempG)
bs.append(tempB)
}
Ask me if you have problems with my code. Thank you for the help.
The bytes-per-row is 5120 because you used CGImageCreateWithImageInRect on a larger image. From the CGImage reference manual:
The resulting image retains a reference to the original image, which means you may release the original image after calling this function.
The new image uses the same pixel storage as the old (larger) image. That's why the new image retains the old image, and why they have the same bytes-per-row: 5120 / 4 bytes per pixel = 1280, which is presumably the pixel width of the original image.
As for why you're not getting the red values you expect: Rob's answer has some useful information, but if you want to explore deeper, consider that your bitmap info is 8194 = 0x2002.
print(CGBitmapInfo.ByteOrder32Little.rawValue | CGImageAlphaInfo.PremultipliedFirst.rawValue)
# Output:
8194
These bits determine the byte order of your bitmap. But those names aren't all that helpful. Let's figure out exactly what byte order we get for those bits:
let context = CGBitmapContextCreate(nil, 1, 1, 8, 4, CGColorSpaceCreateDeviceRGB(), CGBitmapInfo.ByteOrder32Little.rawValue | CGImageAlphaInfo.PremultipliedFirst.rawValue)!
UIGraphicsPushContext(context)
let d: CGFloat = 255
UIColor(red: 1/d, green: 2/d, blue: 3/d, alpha: 1).setFill()
UIRectFill(.infinite)
UIGraphicsPopContext()
let data = UnsafePointer<UInt8>(CGBitmapContextGetData(context))
for i in 0 ..< 4 {
print("\(i): \(data[i])")
}
# Output:
0: 3
1: 2
2: 1
3: 255
So we can see that a bitmap info of 8194 means that the byte order is BGRA. Your code assumes it's RGBA.
In addition to the question about pixelInfo raised by Segmentation, the calculation of offSet seems curious:
let offSet = 4 * (indexX * imageWidth + indexY)
The x and y values are backwards. Also, you cannot assume that the bytes per row always equal 4 times the width in pixels, because some image formats pad the bytes per row. Theoretically, it should be:
let offSet = indexY * bytesPerRow + indexX * bytesPerPixel
Also note that, in addition to the x/y flip issue, you don't want 0 ... widthMax and 0 ... heightMax (those return widthMax + 1 and heightMax + 1 data points). Instead, use 0 ..< widthMax and 0 ..< heightMax.
Also, if you're dealing with arbitrary image files, there are deeper problems here. For example, you can't make assumptions about RGBA vs ARGB vs CMYK, big endian vs little endian, and so on, all of which are captured in the bitmap info field.
Rather than writing code that deals with all of these pixel buffer variations, Apple suggests an alternative: render the image, whatever its configuration, into a context with a known, consistent configuration, and then navigate that buffer instead. See Technical Q&A #1509.
First of all, you haven't initialized the pixelInfo variable. Second, you aren't doing anything with the A value, which shifts everything 8 bits to the left. Also, I don't think you need both pixelInfo and offSet: the two variables serve the same purpose, so keep just one of them, equal to what you wrote for offSet.

SpriteKit SKTexture.preloadTextures high memory usage Swift

I have an SKTextureAtlas with about 90 PNG images. Each image has a resolution of 2000 x 70 pixels and a file size of ~1 KB.
Now I put these images from the atlas into an array like this:
var dropBarAtlas = SKTextureAtlas(named: "DropBar")
for i in 0..<dropBarAtlas.textureNames.count{
var textureName = NSString(format: "DropBar%i", i)
var texture = dropBarAtlas.textureNamed(textureName)
dropFrames.addObject(texture)
}
Then I preload the array with the textures in didMoveToView:
SKTexture.preloadTextures(dropFrames, withCompletionHandler: { () -> Void in})
To play the animation at 30 fps I use SKAction.animateWithTextures:
var animateDropBar = SKAction.animateWithTextures(dropFrames, timePerFrame: 0.033)
dropBar.runAction(animateDropBar)
My problem is that when I preload the textures, memory usage increases to about 300 MB.
Is there a more memory-efficient solution?
And what frame rate and image size are recommended for SKAction.animateWithTextures?
Keep in mind that an image's file size (1 KB in your example) has nothing to do with the amount of memory required to store that image in RAM. You can calculate the required memory with this formula:
width x height x bytes per pixel = size in memory
If you are using the standard RGBA8888 pixel format, this means each of your images will require about 0.5 MB of RAM, because RGBA8888 uses 4 bytes per pixel: 1 byte each for red, green, and blue, and 1 byte for alpha transparency. You can read more here.
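Worked through for the frames in the question (assuming RGBA8888 and no padding; power-of-two padding on the GPU could push it higher):
2000 x 70 x 4 bytes = 560,000 bytes ≈ 0.53 MB per frame
0.53 MB x 90 frames ≈ 48 MB for the whole animation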
So what you can do is optimize your textures and use different texture formats. Here is another example of texture optimization.

reducing bpp of an image using Scilab

Hi guys,
I'm trying to reduce the number of bits per pixel to below 8 on grayscale images using Scilab.
Is this possible?
If so, how can I do it?
Thank you.
I think it is not possible: the integer types available in Scilab are all whole bytes; see the types here.
If you are willing to lose the fine intensity detail, you can shift the low-order bits out.
Pseudo implementation
for x=1:width
for y=1:height
// Get the pixel as a 1-byte unsigned integer
pixel = uint8(picture(x,y))
// Display its bits
disp( dec2bin(pixel) )
// We start with 8 bits and drop 4, keeping 4 bits of information
bits_to_shift = 4
shifted_down_pixel = pixel/(2^bits_to_shift)
// Display the shifted-down pixel
disp( dec2bin(shifted_down_pixel) )
// Shift it back
shifted_back_pixel = shifted_down_pixel*(2^bits_to_shift)
disp( dec2bin(shifted_back_pixel) )
// Replace the old pixel with the new one
picture(x,y) = shifted_back_pixel
end
end
Of course you can do all of the above much faster with one big matrix operation, but the loop shows the concept.
Working example
rgb = imread('your_image.png')
gry = rgb2gray(rgb)
gry8bit = im2uint8(gry)
function result = reduce_bits(img, bits)
// integer division drops the low bits; multiplying restores the original scale
reduced = img / (2^bits);
result = reduced * (2^bits);
endfunction
gry2bit = reduce_bits(gry8bit, 6)
imshow(gry2bit)
