I have the following Windows code that spawns two threads and then waits until they have both completed:
hThreads[0] = _beginthread(&do_a, 0, p_args_a);
hThreads[1] = _beginthread(&do_b, 0, p_args_b);
WaitForMultipleObjects(2, hThreads, TRUE, INFINITE);
I am now porting the same code to use pthreads but am unsure how to do the equivalent of WaitForMultipleObjects:
pthread_create(&hThreads[0], 0, &do_a, p_args_a);
pthread_create(&hThreads[1], 0, &do_b, p_args_b);
???
Is there an equivalent way, using pthreads, to achieve the same functionality?
If you want to wait for all, as you're doing here, you can simply call pthread_join() for each thread. It will accomplish the same thing.
pthread_create(&hThreads[0], 0, &do_a, p_args_a);
pthread_create(&hThreads[1], 0, &do_b, p_args_b);
pthread_join(hThreads[0], NULL);
pthread_join(hThreads[1], NULL);
You can get fancy and do this in a for loop if you've got more than a couple of threads.
I always just used a for loop with pthread_join:
int i;
for (i = 0; i < threads; i++)
    pthread_join(tids[i], NULL);
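For completeness, here is a minimal, self-contained sketch combining both answers (the worker function and the thread count are made up for illustration):

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4

/* Hypothetical worker: each thread just reports its slot index. */
static void *worker(void *arg)
{
    printf("thread %d running\n", *(int *)arg);
    return NULL;
}

int main(void)
{
    pthread_t tids[NUM_THREADS];
    int slots[NUM_THREADS];
    int i;

    for (i = 0; i < NUM_THREADS; i++) {
        slots[i] = i;
        pthread_create(&tids[i], NULL, &worker, &slots[i]);
    }

    /* The pthreads equivalent of WaitForMultipleObjects(n, h, TRUE, INFINITE):
       block until every thread has finished. */
    for (i = 0; i < NUM_THREADS; i++)
        pthread_join(tids[i], NULL);

    return 0;
}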
So I am very new to DirectX and am trying to learn the basics, but I'm running into a problem with my constant buffer. I'm trying to send a struct with three matrices to the vertex shader, but when I try to update the buffer with UpdateSubresource I get "Exception is thrown at 0x710B5DF3 (d3d11.dll) in Demo.exe: 0xC0000005: Access violation reading location 0x0000003C".
My struct:
struct Matracies
{
    DirectX::XMMATRIX projection;
    DirectX::XMMATRIX world;
    DirectX::XMMATRIX view;
};
Matracies matracies;
Buffer creation:
ID3D11Buffer* ConstantBuffer = nullptr;
D3D11_BUFFER_DESC Buffer;
memset(&Buffer, 0, sizeof(Buffer));
Buffer.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
Buffer.Usage = D3D11_USAGE_DEFAULT;
Buffer.ByteWidth = sizeof(Matracies);
Buffer.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
D3D11_SUBRESOURCE_DATA data;
data.pSysMem = &matracies;
data.SysMemPitch = 0;
data.SysMemSlicePitch = 0;
Device->CreateBuffer(&Buffer, &data, &ConstantBuffer);
DeviceContext->VSSetConstantBuffers(0, 1, &ConstantBuffer);
Updating buffer:
DeviceContext->UpdateSubresource(ConstantBuffer, 0, 0, &matracies, 0, 0);
I am not sure what information is relevant to solve this so let me know if anything is missing.
Welcome to the wooly world of DirectX!
The first two steps in debugging any DirectX program are:
(1) Enable the Debug device. See this blog post. This generates additional debug output at runtime, which gives hints about problems like the one you have above.
(2) If a function returns an HRESULT, you must check it for success or failure at runtime. If it were safe to ignore the return value, it would return void. See this page. A minimal sketch of both steps is shown right after this list.
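For illustration, a hedged sketch of both steps at device-creation time (the variable names here are placeholders, not from the question's code):

#include <d3d11.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

ComPtr<ID3D11Device> device;
ComPtr<ID3D11DeviceContext> context;

UINT flags = 0;
#ifdef _DEBUG
flags |= D3D11_CREATE_DEVICE_DEBUG; // (1) request the debug layer in debug builds
#endif

// (2) check every HRESULT instead of discarding it
HRESULT hr = D3D11CreateDevice(
    nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, flags,
    nullptr, 0, D3D11_SDK_VERSION,
    device.GetAddressOf(), nullptr, context.GetAddressOf());
if (FAILED(hr))
{
    // Log hr, fall back to WARP, or abort; do not continue with null pointers.
}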
If you had done either or both of the above, you would have caught the error returned from CreateBuffer above, which resulted in ConstantBuffer still being nullptr when you called UpdateSubresource.
The reason it failed is that you can't in general create a constant buffer that is both D3D11_USAGE_DEFAULT and D3D11_CPU_ACCESS_WRITE. DEFAULT usage memory is often in video memory that is not accessible to the CPU. Since you are using UpdateSubresource as opposed to Map, you should just use:
Buffer.CPUAccessFlags = 0;
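Applied to the creation code above, a corrected sketch (error handling kept minimal) could look like this:

D3D11_BUFFER_DESC Buffer = {};
Buffer.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
Buffer.Usage = D3D11_USAGE_DEFAULT;
Buffer.ByteWidth = sizeof(Matracies);
Buffer.CPUAccessFlags = 0; // DEFAULT usage: the CPU never maps this buffer

D3D11_SUBRESOURCE_DATA data = {};
data.pSysMem = &matracies;

HRESULT hr = Device->CreateBuffer(&Buffer, &data, &ConstantBuffer);
if (FAILED(hr))
{
    // Bail out here; ConstantBuffer is still nullptr on failure.
}
DeviceContext->VSSetConstantBuffers(0, 1, &ConstantBuffer);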
You should take a look at DirectX Tool Kit and its associated tutorials.
I have an EXC_BAD_ACCESS at the last line of this code (this code is fired several times per second), but I cannot figure out what is the problem:
[EAGLContext setCurrentContext:_context];
glActiveTexture(GL_TEXTURE0);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glBindTexture(GL_TEXTURE_2D, _backgroundTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, _outputFrame.cols, _outputFrame.rows, 0, GL_BGRA, GL_UNSIGNED_BYTE, _outputFrame.data);
When debugging I make sure that the texture is created (the id is > 0), that the output frame has a valid data pointer, and that it is a 4-channel matrix. I am inside the drawRect method of a GLKViewController. I think I should not have to bind the framebuffer, as that is one of the things that are automated here. It doesn't crash at the first frame, but a few dozen frames later.
Can anybody spot the problem?
UPDATE:
It seems it's because of a race condition on _outputFrame: it is being updated while glTexImage2D is reading it. I will try to lock it for reading, then report back.
That was indeed the solution (see UPDATE); I fixed it with an NSLock. First I swapped the instance variable _outputFrame with a temporary one that gets updated from another thread, and used the lock when updating the instance variable:
[_frameLock lock];
_outputFrame = temp;
[_frameLock unlock];
Then used the lock when I wanted to read from the instance variable:
glActiveTexture(GL_TEXTURE0);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glBindTexture(GL_TEXTURE_2D, _backgroundTexture);
[_frameLock lock];
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, _outputFrame.cols, _outputFrame.rows, 0, GL_BGRA, GL_UNSIGNED_BYTE, _outputFrame.data);
[_frameLock unlock];
I just figured out a problem like this after several days. What I learned:
1. Better to avoid rendering from multiple threads.
2. Better to render in a GLKView with a base effect, and don't manually manage the framebuffer and renderbuffer yourself.
3. The base effect can render raw pixel data, as shown below.
My solution:
glTexImage2D(...); // upload the raw pixels to the currently bound texture
self.baseEffect.texture2d0.envMode = GLKTextureEnvModeReplace;
self.baseEffect.texture2d0.target = GLKTextureTarget2D;
self.baseEffect.texture2d0.name = texture;
self.baseEffect.texture2d0.enabled = YES;
self.baseEffect.useConstantColor = YES;
My app runs just fine, but after about 3 minutes I get a strange crash.
Has anyone experienced something like this before, and do you know what the cause could be? Could this be some kind of memory leak?
Some code:
- (void) draw {
    [EAGLContext setCurrentContext:context];
    glBindVertexArrayOES(_vertexArray);
    shader.modelViewMatrix = mvm;
    [shader texture:texture];
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
    glBindVertexArrayOES(0);
}
- (void) texture: (int) tex {
    glUseProgram(TextureShader);
    _camModelViewMatrix = GLKMatrix4Multiply(_cameraMatrix, _modelViewMatrix);
    _modelViewProjectionMatrix = GLKMatrix4Multiply(_projectionMatrix, _camModelViewMatrix);
    glUniformMatrix4fv(mvp, 1, 0, _modelViewProjectionMatrix.m);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, textures[tex]);
}
If you need to see any other code, let me know.
I haven't found good documentation on the EXC_??? exception, but my understanding is that the thread has been consuming CPU for too long. The best explanation of this problem I've found is in another question on Stack Overflow: GCD crashes with any task longer than 255 seconds. I have hit this problem when writing long test cases, and I fixed it by breaking them up into smaller test cases or by improving the performance of the test case that triggered the EXC_???. It's worth taking a look at the stack when the EXC_??? happens and considering whether the path could be sped up.
I want the user to be able to step through the slices of a volume, but to give a bit more orientation I'd like to draw the outlines of a cube representing the dimensions of the volume.
What I think I need to do:
1) Get the dimensions of the volume.
2) Start drawing lines, e.g. from [0,0,0] to [0,1,0], from [0,1,0] to [1,1,0], from [1,1,0] to [1,0,0], and back again to [0,0,0], and so on...
Is there an easy way to draw a line in xtk, e.g. using something similar to the sphere constructor here?
Example (black outlines): cube
Thanks in advance!
In X.slice, we create the borders of the current slice like this:
var borders = new X.object();
// each consecutive pair of points added below defines one line segment
borders._points.add(point0.x, point0.y, point0.z); // 0
borders._points.add(point1.x, point1.y, point1.z); // 1
borders._points.add(point1.x, point1.y, point1.z); // 1
borders._points.add(point4.x, point4.y, point4.z); // 4
borders._points.add(point4.x, point4.y, point4.z); // 4
borders._points.add(point2.x, point2.y, point2.z); // 2
borders._points.add(point2.x, point2.y, point2.z); // 2
borders._points.add(point0.x, point0.y, point0.z); // 0
borders._normals.add(0, 0, 0);
borders._normals.add(0, 0, 0);
borders._normals.add(0, 0, 0);
borders._normals.add(0, 0, 0);
borders._normals.add(0, 0, 0);
borders._normals.add(0, 0, 0);
borders._normals.add(0, 0, 0);
borders._normals.add(0, 0, 0);
borders._color = [1, 0, 0];
// set the drawing type to lines
borders._type = X.displayable.types.LINES;
borders._linewidth = 2;
This is an example of internal usage right now, but it should be possible to do the same with the public API.
Ah, I just see that the type getter/setter does not exist yet. We need to create it to enable setting the type externally, so I just created an issue for that: https://github.com/xtk/X/issues/62
Feel free to contribute it :) Should be easy :)
I have an Int16[19200] array.
I want to turn it into a 160×120, single-channel Image.
What is the fastest way of doing this?
I need to do it at 120 fps, so it needs to be really efficient.
Thanks
SW
Found it:
// Pin the Int16[] so the GC cannot move it while native code reads it
GCHandle handle = GCHandle.Alloc(dataArray, GCHandleType.Pinned);
// Allocate an unmanaged IplImage header (it carries no pixel data of its own)
IntPtr imageHeaderForBytes = Marshal.AllocHGlobal(Marshal.SizeOf(typeof(MIplImage)));
CvInvoke.cvInitImageHeader(
    imageHeaderForBytes,
    new Size(160, 120),
    Emgu.CV.CvEnum.IPL_DEPTH.IPL_DEPTH_16S, 1, 0, 4);
// Point the header's imageData field at the pinned managed array
Marshal.WriteIntPtr(
    imageHeaderForBytes,
    (int)Marshal.OffsetOf(typeof(MIplImage), "imageData"),
    handle.AddrOfPinnedObject());
// Copy the pixels into the destination EMGU image, then release everything
CvInvoke.cvCopy(imageHeaderForBytes, EMGUImage.Ptr, IntPtr.Zero);
Marshal.FreeHGlobal(imageHeaderForBytes);
handle.Free();