UMat frame, gray;
VideoCapture cap(0);
if (!cap.isOpened())
    return -1;
for (int i = 0; i < 10; i++)
{
    cap >> frame;
    Canny(frame, frame, 0, 50);
    imshow("canny", frame);
    waitKey(1); // let HighGUI refresh the window
}
return 0;
My doubt is this: the loop runs 10 times, and in the Canny call the src and dst are the same (frame), so it is an in-place operation. What happens to the memory allocations and deallocations at each iteration?
Will there be 9 memory blocks left over with no header pointing to them?
Or is the memory occupied by the frame matrix data deallocated on every iteration?
Or do I have to call release() on every iteration to deallocate the matrix manually?
And when the Canny filter is applied, does the result data replace the old matrix data, or is a new block of memory allocated for the result and pointed to? If so, what happens to the old matrix data?
The following line:
UMat frame
does not allocate any significant image memory. It just creates a header on the stack with space for:
the number of rows,
and columns in the image,
the image type,
a reference count, and
a pointer that will eventually point to the image's pixels, but for the moment points to nothing.
On entry to the loop, the following line:
cap >> frame;
will allocate sufficient memory on the heap for the image's pixels, and initialise the dimensions, the reference count and make the data pointer point to the allocated chunk of image memory - obviously it will also fill the pixel-data from the video source.
When you call Canny with:
Canny(frame, frame, 0, 50);
it will see that the operation is in-place and re-use the same UMat that contains frame, overwriting it. No allocation or release is necessary.
The second, and subsequent, times you go around the loop, the line:
cap >> frame;
will see that there is already sufficient space allocated and load the data from the video stream into the same UMat, thereby overwriting the results of the previous Canny().
When you return from the function at the end, the heap memory for the pixel data is released and the stack memory for the header is given up.
TL;DR: There is nothing to worry about - memory allocation and releasing are taken care of for you!
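To make the header/reference-count mechanics concrete, here is a stdlib-only C++ sketch of the idea described above. This is a simplified model, not OpenCV's actual implementation; MatModel, create, and reuses_single_buffer are made-up names for illustration.

```cpp
#include <memory>
#include <vector>

// Simplified model of a Mat/UMat-style header: a small header object
// that shares ownership of a reference-counted pixel buffer.
struct MatModel {
    int rows = 0, cols = 0;
    std::shared_ptr<std::vector<unsigned char>> data;  // ref-counted pixels

    // Like cv::Mat::create(): reallocate only if there is no buffer yet
    // or the requested size differs from the current one.
    void create(int r, int c) {
        if (data && rows == r && cols == c) return;    // reuse the buffer
        rows = r; cols = c;
        data = std::make_shared<std::vector<unsigned char>>(
            static_cast<size_t>(r) * c);
    }
};

// Models the capture loop: ten "grabs" into the same header reuse one
// allocation; returns true if the buffer address never changed and the
// header is the sole owner at the end.
inline bool reuses_single_buffer() {
    MatModel frame;                        // header only, no pixels yet
    const void* first = nullptr;
    for (int i = 0; i < 10; ++i) {
        frame.create(480, 640);            // models `cap >> frame`
        if (i == 0) first = frame.data.get();
        (*frame.data)[0] = static_cast<unsigned char>(i);  // "in-place" write
    }
    return frame.data.get() == first && frame.data.use_count() == 1;
}
```

When the sole owner goes out of scope, the shared_ptr frees the buffer, which is the model's analogue of the automatic release at function exit.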
I have two questions:
First, is there any more direct, sane way to go from a texture-atlas image to a texture array in WebGL than what I'm doing below? I've not tried this, but doing it entirely in WebGL seems possible, though it would be four times the work and I would still have to make two round trips to the GPU.
Second, am I right that because buffer data for texImage3D() must come from a PIXEL_UNPACK_BUFFER, this data must come directly from the CPU side? That is, there is no way to copy from one block of GPU memory to a PIXEL_UNPACK_BUFFER without copying it to the CPU first. I'm pretty sure the answer to this is a hard "no".
In case my questions themselves are stupid (and they may be), my ultimate goal here is simply to convert a texture atlas PNG to a texture array. From what I've tried, the fastest way to do this by far is via PIXEL_UNPACK_BUFFER, rather than extracting each sub-image and sending them in one at a time, which for large atlases is extremely slow.
This is basically how I'm currently getting my pixel data.
const imageToBinary = async (image: HTMLImageElement) => {
  const canvas = document.createElement('canvas');
  canvas.width = image.width;
  canvas.height = image.height;
  const context = canvas.getContext('2d');
  if (!context) throw new Error('2D context unavailable'); // getContext can return null
  context.drawImage(image, 0, 0);
  const imageData = context.getImageData(0, 0, image.width, image.height);
  return imageData.data;
};
So, I'm creating an HTMLImageElement object, which contains the uncompressed pixel data I want, but has no methods to get at it directly. Then I'm creating a 2D context version containing the same pixel data a second time. Then I'm repopulating the GPU with the same pixel data a third time. Seems bonkers to me, but I don't see a way around it.
I have a texture which I use as a sprite map. It's a 2048 by 2048 texture divided into squares of 256 pixels each, so I have 64 "slots". This map can be empty, partly filled, or full. On screen I am drawing simple squares, each using one slot of the sprite map.
The problem is that I have to update this map from time to time when the asset for a slot becomes available. These assets are downloaded from the internet, but the initial information arrives in advance, so I can tell how many slots I will use and check local storage to see which ones are already available to be drawn at the start.
For example: my info says there will be 10 squares, and 5 of these are available locally, so when the sprite map is initialized those squares are already filled and ready to be drawn. On screen I will show 10 squares: 5 of them will have the image stored in the texture map for those slots, and the remaining 5 are drawn with a temporary image. As a new asset for a slot is downloaded, I want to update my sprite map (which is bound and used for drawing) with the corresponding texture; after the draw is finished and the sprite map has been updated, I set a flag which tells OpenGL that it should start drawing with that slot instead of the temporary image.
From what I have read, there are 3 ways to update a sprite map.
1) Upload a new one with glTexImage2D: I am currently using this approach. I create another updater texture and then simply swap it, but I frequently run into memory warnings.
2) Modify the texture with glTexSubImage2D: I can't get this to work; I keep getting memory access errors or black textures. I believe it's either because the thread is not the same or because I am accessing a texture that is in use.
3) Use Frame Buffer Objects: I could try this, but I am not certain whether I can draw on my texture buffer while it is already being used.
What is the correct way of solving this?
This is meant to be used on an iPhone so resources are limited.
Edit: I found this post which talks about something related here.
Unfortunately, I don't think it's focused on modifying a texture that is currently in use.
the thread is not the same
The OpenGL ES API is absolutely not multi-threaded. Update your texture from the main thread.
Because your texture must be uploaded to the GPU, glTexSubImage2D is the fastest and simplest path. Keep this direction :)
Rendering to a framebuffer (with your texture attached) is very fast for data that is already on the GPU (not your case). And yes, you can draw on a framebuffer bound to a texture (i.e. a framebuffer that uses the texture as its color attachment).
Just one constraint: you can't read and write the same texture in one draw call (the texture attached to the current framebuffer can't also be bound to a texture unit).
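For the glTexSubImage2D route, the only bookkeeping is turning a slot index into the xoffset/yoffset of its 256-px square. A small sketch assuming the 2048x2048, 64-slot layout from the question; slotOffset and the constants are hypothetical names of mine, not part of the GL API.

```cpp
#include <utility>

// 2048x2048 atlas, 256-px slots: 8 columns x 8 rows = 64 slots.
constexpr int kAtlasSize   = 2048;
constexpr int kSlotSize    = 256;
constexpr int kSlotsPerRow = kAtlasSize / kSlotSize;   // 8

// Maps a slot index (0..63) to the xoffset/yoffset you would pass to
// glTexSubImage2D when uploading that one slot.
constexpr std::pair<int, int> slotOffset(int slot) {
    return { (slot % kSlotsPerRow) * kSlotSize,        // xoffset
             (slot / kSlotsPerRow) * kSlotSize };      // yoffset
}

// Usage (sketch, on the thread that owns the GL context):
//   auto xy = slotOffset(slot);
//   glTexSubImage2D(GL_TEXTURE_2D, 0, xy.first, xy.second,
//                   kSlotSize, kSlotSize, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
```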
I am cropping an opencv Mat:
cv::Size size = img.size();
cv::Rect roi(size.width/4, size.height/4, size.width/2, size.height/2);
img= img(roi);
I then use img.data pointer to create a vtkImageData (via vtkImageImport):
vtkSmartPointer<vtkImageImport> importer = vtkSmartPointer<vtkImageImport>::New();
importer->SetImportVoidPointer(img.data);
...
importer->Update();
vtkImageData* vtkImg = importer->GetOutput();
I don't get the expected result when I display vtkImg. I've dug into OpenCV's code, and the problem is that when creating the cropped data, OpenCV does not allocate a new pointer four times smaller; instead it keeps the same already-allocated block, offsets the data pointer, and flags the new img variable as non-continuous. Therefore my VTK image still imports data from the original, uncropped Mat. I know I could import the full image into vtkImageData and then do the cropping with a VTK filter, but I would prefer not to.
Is there a way with opencv to obtain a cropped image that is "physically" cropped (with a newly allocated data pointer)?
Thank you
I believe you are looking for cv::Mat::clone(). It makes a deep copy of the underlying image data and returns a cv::Mat object which contains said data.
You would then change the line
img= img(roi);
to
img = img(roi).clone();
After which img contains only the cropped data.
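To see why the raw img.data pointer misleads the importer, here is a stdlib-only model of the two behaviours (View, roiOf, and deepCopy are illustrative names of mine, not OpenCV API): img(roi) is just a pointer offset plus the parent's row stride, while clone() copies row by row into a tight buffer whose stride equals its width, which is what a raw-pointer consumer like vtkImageImport expects.

```cpp
#include <cstddef>
#include <vector>

// A Mat-ROI-like view: pointer offset into the parent buffer, with the
// parent's row stride. No pixel data is copied or reallocated.
struct View {
    const unsigned char* data;
    std::size_t rows, cols, stride;  // stride = parent row length
};

// Model of img(roi): pure pointer/stride arithmetic on a 1-byte-per-pixel
// image stored row-major in `img` with `imgCols` columns.
inline View roiOf(const std::vector<unsigned char>& img, std::size_t imgCols,
                  std::size_t x, std::size_t y,
                  std::size_t w, std::size_t h) {
    return { img.data() + y * imgCols + x, h, w, imgCols };
}

// Model of clone(): copy row by row into a tight, contiguous buffer.
inline std::vector<unsigned char> deepCopy(const View& v) {
    std::vector<unsigned char> out;
    out.reserve(v.rows * v.cols);
    for (std::size_t r = 0; r < v.rows; ++r)
        out.insert(out.end(), v.data + r * v.stride,
                   v.data + r * v.stride + v.cols);
    return out;
}
```

Reading the View's data pointer as if it were a tight w-by-h block would pull in pixels from outside the ROI, which is exactly the wrong image the question describes.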
Is it possible to create a VBO and reuse it between calls to glDrawElements in the same rendering cycle? (I tried and obtained weird results). The example below is missing bindings, etc.
Init code (executed only once):
glGenBuffers(...)
glBufferData(...)
Render frame code (executed for each frame):
glMapBufferOES(...)
//... Update buffer from index 0 to X
glDrawElements(...)
//... Update buffer from index 0 to Y
glDrawElements(...)
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
You need to unmap your buffer before drawing with it. If you don't unmap, that's probably why you're seeing weird results with glDrawElements.
http://www.opengl.org/sdk/docs/man/xhtml/glMapBuffer.xml
After glDrawElements is called, you can remap your buffer and fill it in again.
You will probably get better performance by not reusing the buffer right away. Remapping right after the draw will probably block until the draw is completed.
I am using a Texture2D object as a tilesheet. It's one-dimensional: one tile tall and however many tiles long.
I'm just curious if there's any disadvantage (Aside from being more difficult to edit, I guess) to having them laid out like this as opposed to a more compact way, e.g. instead of 64x1 tiles, make it 16x16. It wouldn't be very hard to change it, but I figure why bother if there's no hurt in having a long image!
The only real disadvantage is that you'll hit the maximum width (2048 on the Reach profile, 4096 on HiDef) of the texture with fewer sprites.
This isn't really a problem because, when it happens, it's so trivial to add support for more rows.
Seeing as you've asked the question, there is one obscure, performance-related thing that is at least interesting to be aware of, even though you almost never need to worry about it.
A texture's pixel data is stored in CPU memory in row-by-row order. So if your tiles are stored horizontally and you try to read or write the pixels for a single tile (i.e. GetData and SetData), you will be accessing many small, spread-out sections of memory. If you stored your tiles vertically, each tile would occupy a single large section of memory; this has better cache coherency and can be copied with fewer operations (i.e. it's faster).
This is not a problem on the GPU, where textures are stored with a layout such that all nearby pixels (not just the ones to the left and right) are stored nearby in memory.
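A quick way to see the row-major point is to count how many separate contiguous runs of memory one tile's pixels occupy in each layout. This is a toy model under the stated assumption of row-major CPU storage; runsPerTile is a made-up helper.

```cpp
#include <cstddef>

// In row-major storage, a tile's rows are adjacent in memory only when
// the texture is exactly one tile wide. Otherwise each of the tile's
// `tileH` rows is a separate run, split apart by the other tiles' pixels.
constexpr std::size_t runsPerTile(std::size_t textureWidth,
                                  std::size_t tileW, std::size_t tileH) {
    return textureWidth == tileW ? 1 : tileH;
}
```

So copying one tile out of a horizontal strip takes one memcpy per tile row, while a vertical strip needs only one.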
Having a long image can make it easier to 'index' sprites in the sheet. For example, in your case, the image at index i is located at coordinates (w * i, 0), where w is the width of one sprite. Having it in a square would mean that you need more complex math to find the right sprite. It would be at (w * (i % 16), h * (i / 16)). (By the way, those coordinates are in pixels.)
So use the long image! It'll keep your code cleaner.
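The two indexing schemes from the answer above, written out as a small sketch; spriteCoord1D and spriteCoord16x16 are illustrative names, and w and h are the size of one sprite in pixels.

```cpp
#include <utility>

// Long single-row sheet: sprite i starts at (w * i, 0).
constexpr std::pair<int, int> spriteCoord1D(int i, int w) {
    return { w * i, 0 };
}

// 16x16 grid: sprite i starts at (w * (i % 16), h * (i / 16)).
// Note the integer division for the row index.
constexpr std::pair<int, int> spriteCoord16x16(int i, int w, int h) {
    return { w * (i % 16), h * (i / 16) };
}
```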