glGetTexImage returns all-black texture [closed]

Closed 2 years ago. This question was caused by a typo or a problem that can no longer be reproduced, and it is not accepting answers.
I'm using the OpenGL API (v4.6) to do some image processing; it's basically OpenGL image load/store in a shader. I bind the texture via glBindImageTexture, do some processing, and then want to use glGetTexImage to read back the contents of the updated image. The call itself completes without error, but it returns an all-black image, which isn't the case when the texture is used in other shaders.
TL;DR: The image works fine when used inside a shader (I use it in many, and RenderDoc displays it correctly on the GPU side), but it reads back all-black on the CPU side.
This is what my glGetTexImage output should have looked like (viewed with RenderDoc after rendering). The relevant code:
use_shader(&shader);
glBindBuffer(GL_ARRAY_BUFFER, quad_vbo);
setInt(shader, "screen_width", window_width);
setInt(shader, "screen_height", window_height);
glBindImageTexture(0, head_list, 0, GL_FALSE, 0, GL_READ_WRITE, GL_R32UI); // head_list is the OpenGL texture id
glDrawArrays(GL_TRIANGLES, 0, 24);
glBindVertexArray(0);
glMemoryBarrier(GL_ALL_BARRIER_BITS);
// the image has a single channel (GL_R32UI), so one component
GLuint *image_head = (GLuint*)malloc(sizeof(u32) * window_width * window_height);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, head_list);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RED, GL_UNSIGNED_INT, image_head);

Found it. GL_RED is a pixel-transfer format for normalized/floating-point components only; for an integer texture such as GL_R32UI the transfer format must be GL_RED_INTEGER, otherwise the call fails with GL_INVALID_OPERATION and never writes to the buffer. Changing every GL_RED to GL_RED_INTEGER throughout the code, i.e. in glTexImage2D and glGetTexImage, resolved the issue!
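For reference, here is a minimal sketch of the corrected readback, assuming a GL 4.6 context and loader are already set up (the helper name is mine; the texture and dimensions mirror the question):
#include <vector>
#include <cstdint>
// Read back a GL_R32UI texture. The transfer format must be GL_RED_INTEGER:
// plain GL_RED generates GL_INVALID_OPERATION on an integer texture and leaves
// the destination buffer untouched, hence the "all-black" readback.
std::vector<std::uint32_t> read_r32ui_texture(GLuint tex, int width, int height)
{
    std::vector<std::uint32_t> pixels(static_cast<std::size_t>(width) * height);
    glBindTexture(GL_TEXTURE_2D, tex);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, pixels.data());
    return pixels;
}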

Related

FFT in C++ AMP throws CLIPBRD_E_CANT_OPEN error

I am trying to use C++ AMP in Visual C++ 2017 on Windows 10 (updated to the latest), and I found the archived FFT library from the C++ AMP team on CodePlex. I tried to run the sample code, but the program threw an out-of-memory error when creating the DirectX FFT. I solved that problem by following a thread on the Microsoft forum.
However, the problems didn't stop there. When the FFT library tries to create an unordered access view, it throws CLIPBRD_E_CANT_OPEN, even though I am not touching the clipboard at all.
Thank you for reading this!
It seems I have solved the problem. The original post mentioned that we need to create a new DirectX device and then create an accelerator_view on it. I then pass that view to the fft constructor as the second parameter:
fft(
    concurrency::extent<_Dim> _Transform_extent,
    const concurrency::accelerator_view& _Av = concurrency::accelerator().default_view,
    float _Forward_scale = 0.0f,
    float _Inverse_scale = 0.0f)
However, I still got CLIPBRD_E_CANT_OPEN crashes.
After reading the code, I realized that I need to create the arrays on that DirectX accelerator_view too, so I changed the construction to:
array<std::complex<float>, dims> transformed_array(extend, directx_acc_view);
The idea came from the different behaviors of create_uav(): the library's internal buffers and the precomputation caused no problem, but the sample's calls triggered the clipboard error. I guessed that the device they were created on was the difference, so I made that change.
I hope my understanding is correct; in any case, there are no such errors now.
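To make that concrete, here is a minimal sketch of creating one DirectX device, wrapping it in an accelerator_view, and keeping every C++ AMP array on that same view. The fft usage is left as comments because the exact template parameters depend on the archived library; the helper name is mine, and the rest is standard D3D11 and C++ AMP:
#include <amp.h>
#include <d3d11.h>
#include <complex>
// Create a D3D11 device and wrap it in a C++ AMP accelerator_view.
concurrency::accelerator_view make_directx_view()
{
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D_FEATURE_LEVEL level = D3D_FEATURE_LEVEL_11_0;
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      &level, 1, D3D11_SDK_VERSION, &device, nullptr, &context);
    return concurrency::direct3d::create_accelerator_view(device);
}
// Everything the FFT touches must then live on the same view, e.g.:
// auto view = make_directx_view();
// fft<std::complex<float>, 2> my_fft(transform_extent, view);  // view as 2nd ctor argument
// concurrency::array<std::complex<float>, 2> transformed_array(transform_extent, view);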

webgl replace program shader

I'm trying to swap the fragment shader used in a program. The fragment shaders all have the same variables, just different calculations; I am trying to provide alternative shaders for lower-end hardware.
I end up getting single-color output (instead of a texture). Does anyone have an idea what I could be doing wrong? I know the shaders are being used, because the color changes accordingly.
// if I don't do this:
// WebGL: INVALID_OPERATION: attachShader: shader attachment already has shader
gl.detachShader(program, _.attachedFS);
// select a random shader; they all use the same parameters
attachedFS = fragmentShaders[~~(Math.random() * fragmentShaders.length)];
// attach the new shader
gl.attachShader(program, attachedFS);
// if I don't do this, nothing happens
gl.linkProgram(program);
// if I don't add this line:
// globject.js:313 WebGL: INVALID_OPERATION: uniform2f:
// location not for current program
updateLocations();
I am assuming you have called gl.compileShader(fragmentShader).
Have you tried the code on a different browser to see if you get the same behavior? (It could be specific to one standards implementation.)
Have you tried deleting the fragment shader (gl.deleteShader(attachedFS);) right after detaching it? The previous shader may still be referenced in memory.
If that doesn't let you move forward, you may have to detach both shaders (vertex and fragment) and reattach them, or even recreate the program from scratch.
I found the issue after trying about everything else without result. It also explains why I was seeing the shader change but only getting a flat color: I was not updating some of the attributes. Attribute and uniform locations are not guaranteed to survive a re-link, so they have to be re-queried after every linkProgram call.

Set unicoin amount [closed]

Closed 8 years ago. This question does not appear to be about programming within the scope defined in the help center, and it is not accepting answers.
I've tried
StackExchange.uc.setBalance(99999999);
and it was nothing more than a visual thing.
Also, I bought the power-up and now I want an inspirational answer, because I feel that the unicorns would appreciate it.
Q: Is there a faster way to mine unicoins, or a way to set them, so I can get ALL the unicoins?
EDIT:
I feel like the unicorns aren't happy with the decision I've made of asking. They're outside my house.
Unicorn edit:
We've got him. Don't come looking, he's ours now.
Go to the mining page, execute the following code in your console, and just keep moving your mouse over the rocks:
$('#uc-rockcanvas').mousemove(function(event) {
    for (var i = 0; i < 10; i++) {
        var mousedownEvent = document.createEvent("MouseEvent");
        mousedownEvent.initMouseEvent("mousedown", true, true, window, 0,
            event.screenX, event.screenY, event.clientX, event.clientY,
            event.ctrlKey, event.altKey, event.shiftKey, event.metaKey,
            0, null);
        event.target.dispatchEvent(mousedownEvent);
    }
});

crash in cvBlobsLib VS2008

I am using the cvBlobsLib library for blob detection; however, I am hitting a crash in the program.
My code crashes at this point:
CBlobResult blobs;
blobs = CBlobResult(imageSkinPixels, NULL, 0);
imageSkinPixels must be a single-channel image: in other words, you need to convert your RGB image to grayscale before passing it to CBlobResult().
Also, we don't know the type of imageSkinPixels, but it should be a pointer to an IplImage.
That's all I can say with the code you shared. If this doesn't fix the problem you'll need to share more code.
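For illustration, a minimal sketch of that conversion using OpenCV's legacy C API, assuming imageSkinPixels is a 3-channel BGR IplImage* (the CBlobResult constructor signature can differ slightly between cvBlobsLib versions):
#include <opencv/cv.h>
#include "BlobResult.h"
// Convert the 3-channel input to a single-channel grayscale image.
IplImage* gray = cvCreateImage(cvGetSize(imageSkinPixels), IPL_DEPTH_8U, 1);
cvCvtColor(imageSkinPixels, gray, CV_BGR2GRAY);
// Pass the single-channel image to cvBlobsLib.
CBlobResult blobs(gray, NULL, 0);
cvReleaseImage(&gray);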

Compressing BitmapData

The situation is this:
I've written a simple MovieClip replacement that converts an existing imported MovieClip into a sequence of BitmapData objects. This removes the need for Flash to render the MovieClip's vector data on every frame.
But BitmapData has a huge memory footprint. I've tried converting the BitmapData to a ByteArray and using the compress() method, which results in a significantly smaller footprint, but it has proven impractical: on each redraw I uncompress() the ByteArray, use setPixels() to blit the data to the screen, and then compress() the frame again. This works, but it is terribly slow.
So I was wondering if anybody has another approach I could try. In Flash, is it possible to compress bitmap data in memory and still blit it to the screen quickly?
I wonder how animated GIFs work natively in Flash. Does it uncompress them to BitmapData behind the scenes, or is each frame decompressed on the fly?
Perhaps there is an Alchemy project that attempts to blit compressed images?
Thanks for any advice you can offer :)
@thienhaflash's response is good, but it has aged a year, and since then the Flash Player and AIR runtimes have expanded their capabilities. Today I stumbled on this little tidbit in Adobe's AS3 guide: as of Player 11.3 there are native image compression techniques available. Here's a snippet:
// Compress a BitmapData object as a JPEG file.
var bitmapData:BitmapData = new BitmapData(640,480,false,0x00FF00);
var byteArray:ByteArray = new ByteArray();
bitmapData.encode(new Rectangle(0,0,640,480), new flash.display.JPEGEncoderOptions(), byteArray);
Not sure about the practicality for blitting but it's nice that it can be done natively.
For memory reasons you need to think twice before converting a MovieClip to a bitmap sequence. Do you really need to? Can you break things down? Several elements (like the background) are static or just move around, so why not cache a bitmap for each element instead of one big bitmap sequence?
I usually use an AnimatedBitmap (my name for the bitmap-sequence alternative to a MovieClip) only for small animated icons and other computation-heavy effects (like fire or smoke). Just break things down as much as you can!
As far as I know, there is no way to compress the memory used by a BitmapData that lives in memory, and nothing in Alchemy would help reduce the memory used in this case.
Animated GIFs don't play natively in Flash; you will need a library for that. Search for the AnimatedGIF AS3 library from bytearray.com; it just reads the GIF file as a raw ByteArray and converts it to an animated bitmap, much like what you've done.
This is an old question, but there is more recent info on it: Jackson Dunstan has experimented with BitmapData, and it turns out that BitmapData obtained from compressed sources will "deflate" after going unused for some time.
Here are the articles: http://jacksondunstan.com/articles/2112, plus the two referred to at its beginning.
So you could absolutely do something like:
var byteArray:ByteArray = new ByteArray();
myBitmapData.encode(new Rectangle(0, 0, 640, 480), new flash.display.JPEGEncoderOptions(), byteArray);
var loader:Loader = new Loader();
loader.addEventListener(Event.COMPLETE, function(_e:Event):void {
    if (loader.content is Bitmap) {
        myBitmapData.dispose();
        myBitmapData = Bitmap(loader.content).bitmapData;
    }
});
loader.loadBytes(byteArray);
I'm not sure it would work exactly as-is, and you definitely want to handle your memory more carefully, but now myBitmapData will be decompressed when you read from it and re-compressed after it goes unused for about ten seconds.
