I'm working on an app in which I'd like to change a vector from float to short. The vector is declared in a header file like this:
vector<float> vertices;
and it works fine, but if I switch it to this:
vector<short> vertices;
and recompile, the app crashes at runtime with the following error:
malloc: *** error for object 0x1035804: incorrect checksum for freed object
- object was probably modified after being freed. *** set a breakpoint in
malloc_error_break to debug
I have no idea what's going on. If it helps, this is an OpenGL application I'm developing for the iPad.
I still don't know why my app wouldn't run when I changed my vector from float to short, but I solved the problem by creating a new vector object of shorts and using that instead. No more problems, and it works as expected.
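For reference, a minimal sketch of that workaround (the helper name toShorts and the conversion loop are my own illustration, not the original code):

#include <vector>

std::vector<short> toShorts(const std::vector<float>& vertices)
{
    // Build a separate vector<short> instead of changing the
    // original vector's element type in the header.
    std::vector<short> shortVertices;
    shortVertices.reserve(vertices.size());
    for (float v : vertices)
        shortVertices.push_back(static_cast<short>(v));  // explicit narrowing
    return shortVertices;
}

The original vector<float> declaration stays untouched; the short data lives in its own object.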
I would like to utilize OpenCV's integration of OpenGL/OpenCL to achieve fast distortion of images directly on the GPU while avoiding GPU/CPU image transfers. I can create a cv::ogl::Buffer from an OpenGL buffer object in Qt:
// m_pub is of type QOpenGLBuffer
cv::ogl::Buffer b(512, 512, CV_8UC4, m_pub.bufferId());
But the next line throws an exception:
cv::UMat m = cv::ogl::mapGLBuffer(b);
The error originally reported by OpenCV was:
OpenCV(4.5.5) Error: Unknown error code -220 (OpenCL:
clCreateFromGLBuffer failed) in cv::ogl::mapGLBuffer, file
D:\OpenCV\opencv-4.5.5\modules\core\src\opengl.cpp, line 1886
To get more information, I added a call to cv::ocl::getOpenCLErrorString(status) in opengl.cpp, rebuilt, and found that the underlying error is CL_INVALID_CONTEXT.
I've checked cv::ocl::Context, cv::ocl::Device, and cv::ocl::Platform, and added a call to cv::ocl::attachContext, but none of it works. I'm stuck here and don't know how to proceed.
Any suggestions are really appreciated. Thanks.
The fix is to call cv::ogl::ocl::initializeContextFromGL() during the initialization phase. My project is based on Qt and the window inherits from QOpenGLWindow, so I call that function in initializeGL().
My guess is that without this call, OpenCV still creates an OpenCL context automatically, but it is not the one derived from the OpenGL context, which is why the CL_INVALID_CONTEXT error occurred.
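For context, a minimal sketch of where that call sits (the class name MyWindow and the surrounding setup are my assumptions, not the project's actual code):

#include <QOpenGLWindow>
#include <opencv2/core/opengl.hpp>

class MyWindow : public QOpenGLWindow
{
protected:
    void initializeGL() override
    {
        // The GL context is current here, so the OpenCL interop
        // context can be derived from it, before any call to
        // cv::ogl::mapGLBuffer is made.
        cv::ogl::ocl::initializeContextFromGL();
        // ... create m_pub and other GL resources afterwards ...
    }
};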
As a side note, I used cv::remap to process a 512x512 image and found that processing on the GPU via OpenCL was not much faster than on the CPU; it still takes around 12 ms on average. That is too slow for an application that needs a high frame rate and has to leave time for other processing.
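For illustration, the mapped pipeline being timed looks roughly like this (mapX and mapY are hypothetical CV_32FC1 remap tables prepared elsewhere; error handling omitted):

#include <opencv2/core/opengl.hpp>
#include <opencv2/imgproc.hpp>

// Hypothetical helper: distort the contents of a GL buffer on the GPU.
cv::UMat distort(cv::ogl::Buffer& b, const cv::UMat& mapX, const cv::UMat& mapY)
{
    cv::UMat src = cv::ogl::mapGLBuffer(b, cv::ACCESS_READ);  // no CPU copy
    cv::UMat dst;
    cv::remap(src, dst, mapX, mapY, cv::INTER_LINEAR);        // runs via OpenCL
    cv::ogl::unmapGLBuffer(src);                              // hand buffer back to GL
    return dst;
}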
From time to time my app initializes a bunch of DirectX resources and loads scenes, some of which contain large textures (up to 200–300 MB each). At first everything works fine, but after a while FromMemory() simply stops working, and only for the big textures:
SlimDX.Direct3D11.Direct3D11Exception: E_FAIL: An undetermined error occurred (-2147467259)
at SlimDX.Result.Throw[T](Object dataKey, Object dataValue)
at SlimDX.Result.Record[T](Int32 hr, Boolean failed, Object dataKey, Object dataValue)
at SlimDX.Direct3D11.ShaderResourceView.ConstructFromMemory(Device device, Byte[] memory, D3DX11_IMAGE_LOAD_INFO* loadInformation)
at SlimDX.Direct3D11.ShaderResourceView.FromMemory(Device device, Byte[] memory)
Of course, I dispose of all previously created ShaderResourceViews before loading a new scene, but FromMemory() only starts working again after the app restarts. Could you please tell me what else could be wrong?
UPD:
With Texture2D.FromMemory(), I get this:
System.Runtime.InteropServices.SEHException (0x80004005): External component has thrown an exception.
at D3DX11CreateTextureFromMemory(ID3D11Device* , Void* , UInt32 , D3DX11_IMAGE_LOAD_INFO* , ID3DX11ThreadPump* , ID3D11Resource** , Int32* )
at SlimDX.Direct3D11.Resource.ConstructFromMemory(Device device, Byte[] memory, D3DX11_IMAGE_LOAD_INFO* info)
at SlimDX.Direct3D11.Texture2D.FromMemory(Device device, Byte[] memory)
And with native code debugging enabled:
Exception thrown at 0x748AA882 in app.exe: Microsoft C++ exception: std::bad_alloc at memory location 0x00AFC7C8.
Exception thrown: 'System.Runtime.InteropServices.SEHException' in SlimDX.dll
Sadly, I have no idea how D3DX11CreateTextureFromMemory() actually works or why it tries to re-allocate memory. Maybe it’s time to move to x64…
Found the problem. It turns out all I had to do was add the LARGEADDRESSAWARE flag to the executable. Without it, the limit was 1 GB, which is quite easy to hit at 300 MB per texture.
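For reference, one common way to set that flag on a .NET executable is an editbin post-build step (this is my assumption about how it would be done here; the tool ships with Visual Studio, and app.exe stands in for the actual executable name):

editbin /LARGEADDRESSAWARE app.exe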
Also, of course, since most of that data ended up in the Large Object Heap, setting GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce helped as well.
Sorry for wasting your time.
I'm working on a React Native 0.40 app that uses Realm JS 1.0.0. I am experiencing crashes when reading/writing, along with some odd behavior. For example, one of the crashes only happens when I add a second record to a table.
When it crashes, the output prints:
<Error>: MY_APP(18085,0x7000075ce000) malloc: *** mach_vm_map(size=18446744073345937408) failed (error code=3)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
(this is on the iOS simulator)
I've worked with Realm in Swift projects in the past, where I had to be careful not to pass objects or collections around and not to use the same Realm from multiple threads. However, I don't think those issues apply to React Native. Right?
My general approach is to have a Realm factory like:
realm.js
export function getUserRealm() {
  const realm = new Realm({
    schema: combinedSchemas,
    path: `main.realm`
  });
  console.log("💿 User Realm Path:", realm.path);
  return realm;
}
Then I call getUserRealm() from wherever I need to use Realm.
There are a few places where I do pass Results between functions and callbacks.
The main question is, what should I guard against in order to prevent crashes?
I am using Cocos2D 3.1 with SpriteBuilder and I am simply trying to get things working. I have set up my SpriteBuilder ccb with a physicsNode and put my physics objects within it.
In my app I then try to call this:
[_sprite.physicsBody applyImpulse:ccp(-95.0f, 2800.0f)];
All of a sudden, there is a SIGABRT and it crashes on this line in cpSpaceComponent.c:
cpAssertHard(cpBodyGetType(body) == CP_BODY_TYPE_DYNAMIC, "Internal error: Attempting to deactivate a non-dynamic body.");
Aborting due to Chipmunk error: Internal error: Attempting to deactivate
a non-dynamic body. Failed condition: cpBodyGetType(body) == CP_BODY_TYPE_DYNAMIC
I have looked around and there is no documentation on this type of crash, and I am not even sure where to begin. Does anyone know how to fix it?
Fixed it. It turns out my CCPhysicsNode had a sleep time threshold of 0; once I set it to the usual 0.5, everything worked fine.
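For anyone configuring the space in code rather than in SpriteBuilder, the equivalent call at the Chipmunk C level would presumably look like this (the assumption being that the space passed in is the cpSpace wrapped by the CCPhysicsNode; this is not code from the project):

#include "chipmunk/chipmunk.h"

void enableNormalSleeping(cpSpace *space)
{
    // A non-zero sleep time threshold lets dynamic bodies be
    // deactivated normally; 0.5 s matches the value that fixed it.
    cpSpaceSetSleepTimeThreshold(space, 0.5f);
}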
To explain the situation: my OpenGL view controller works fine when I install the app via Xcode (in debug mode, so to speak), but it crashes when installed through In House Distribution (HockeyApp is used for that).
Everything works fine without any error via Xcode but breaks on line 61:
https://gist.github.com/jonasbark/561e7e66671b041f0107
uniforms[UNIFORM_MVP_MATRIX] = glGetUniformLocation(program, "mvp_matrix");
I really have no idea why. I even tried hard-coding the shader files as NSStrings, but no luck. It makes no sense to me why this wouldn't work with In House Distribution...
This is the exception reason:
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x00000000
The uniforms variable is declared like this:
enum {
    UNIFORM_MVP_MATRIX,
    UNIFORM_TEXTURE,
    NUM_UNIFORMS
};
GLint uniforms[NUM_UNIFORMS];
And just in case anyone wants to see the source code: it's based on http://www.endodigital.com/opengl-es-2-0-on-the-iphone/ (see EDCubeDemo_AppendixA.zip).
Okay... finally solved this.
I replaced GLint uniforms[NUM_UNIFORMS]; with GLint uniforms[2]; and it worked. I have no idea why it would only fail in release builds; it must be some kind of compiler optimization...
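Put together, a sketch of the change as described (the lookup line is the one from the gist quoted above; everything else is unchanged):

enum {
    UNIFORM_MVP_MATRIX,
    UNIFORM_TEXTURE,
    NUM_UNIFORMS
};
GLint uniforms[2];   // was: GLint uniforms[NUM_UNIFORMS];

// Lookup code (line 61 of the gist) stays exactly the same:
uniforms[UNIFORM_MVP_MATRIX] = glGetUniformLocation(program, "mvp_matrix");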