C++: placement new causes segmentation fault in constructor (C++17)

Please find my code below. When I use placement new, I see a segmentation fault in the constructor, and I am not sure why, since I can create normal objects without problems. Also, if I use the plain new operator, the code works; the issue is only with placement new. Can someone help me understand this behavior?
#include <iostream>
#include <string>

using namespace std;

template<typename T = int>
class LessThan {
public:
    LessThan(T x) : mem(x) {}
    bool operator()(T x) {
        return x < mem;
    }
private:
    T mem;
};

int main() {
    LessThan a(9);
    cout << "less than 9: " << a(10) << endl;
    cout << "less than 9: " << a(8) << endl;

    LessThan<int>* b = nullptr; // = new LessThan(11);
    new (b) LessThan<int>(11);
    if (b == nullptr) {
        cout << "b is null..." << endl;
    }
    cout << "less than 11: " << (*b)(12) << endl;
    cout << "less than 11: " << (*b)(8) << endl;
    return 0;
}

LessThan<int>* b = nullptr;// = new LessThan(11);
new(b) LessThan<int>(11);
I'm not sure what kind of trick you are trying to achieve by passing a nullptr to placement new, but when using placement new you are supposed to supply storage that is large enough (and properly aligned) to hold an instance of your class. The pointer must point to that storage, not be a null pointer of the class type:
alignas(LessThan<int>) char memory[sizeof(LessThan<int>)];
new (memory) LessThan<int>(11);
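For completeness, a minimal sketch of how the placement-new path could look end to end (the buffer name and the explicit destructor call are only illustrative): construct the object in caller-provided storage, use it through the returned pointer, and destroy it manually, since delete must not be called on an object created this way.

alignas(LessThan<int>) char memory[sizeof(LessThan<int>)]; // storage with the right size and alignment
LessThan<int>* b = new (memory) LessThan<int>(11);         // construct in that storage; keep the returned pointer
cout << "less than 11: " << (*b)(12) << endl;
cout << "less than 11: " << (*b)(8) << endl;
b->~LessThan();                                            // destroy manually; do not call delete on b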

Related

About extending a Look Up Table at compile time

I'd like to extend my instrumenting profiler so that it affects performance as little as possible.
In my current implementation, I'm using a ProfilerHelper that takes one string and is placed wherever you want inside the function being profiled.
Its constructor starts the measurement and its destructor stops it, logging the delta in an unordered_map entry whose key is the string.
Now, I'd like to turn all of that into something faster.
First of all, I'd like to create a string LUT (Look Up Table) containing the function names at compile time, and turn the unordered_map into a plain vector that is paired with the string LUT.
Now the question is: I've managed to create a LUT with std::string_view, but I cannot find a way to extend it at compile time.
A first rough trial looks like this:
#include <array>
#include <string_view>
#include <iostream>

template<unsigned N>
constexpr auto LUT() {
    std::array<std::string_view, N> Strs{};
    for (unsigned n = 0; n < N; n++) {
        Strs[n] = "";
    }
    return Strs;
}

constexpr std::array<std::string_view, 0> StringsLUT { LUT<0>() };

constexpr auto AddString(std::string_view const& Str)
{
    constexpr auto Size = StringsLUT.size();

    std::array<std::string_view, Size + 1> Copy{};
    for (std::size_t i = 0; i < Size; ++i)
        Copy[i] = StringsLUT[i];
    Copy[Size] = Str;

    return Copy;
}

int main()
{
    constexpr auto Strs = AddString(__builtin_FUNCTION());

    //for (auto const Str : Strs)
    std::cout << Strs[0] << std::endl;
}
So my idea would be to call AddString again, wherever needed, in each function to be profiled, extending this list at compile time.
But of course I would have to take the returned Copy and replace StringsLUT each time, to end up with a final StringsLUT containing all the function names (the chaining is sketched below).
Is there a way to do that at compile time?
Sorry, I'm just entering the magical new world of constexpr applied to LUTs these days.
Thanks for your support in advance.
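A minimal sketch of the chaining idea described above, assuming each call receives the previous array as an argument and returns a new, larger one (the names lut0/lut1/lut2 and the sample strings are purely illustrative). Note that this does not mutate a single global StringsLUT; each extension produces a new constexpr object:

#include <array>
#include <string_view>
#include <iostream>

// Append one entry to an existing constexpr array, returning a larger copy.
template <std::size_t N>
constexpr auto AddString(std::array<std::string_view, N> const& lut,
                         std::string_view str)
{
    std::array<std::string_view, N + 1> copy{};
    for (std::size_t i = 0; i < N; ++i)
        copy[i] = lut[i];
    copy[N] = str;
    return copy;
}

int main()
{
    constexpr std::array<std::string_view, 0> lut0{};
    constexpr auto lut1 = AddString(lut0, "f1");   // each extension is a new object
    constexpr auto lut2 = AddString(lut1, "f2");

    for (auto s : lut2)
        std::cout << s << '\n';
}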

Segmentation Fault on eraseFromParent() LLVM

bool runOnFunction(Function &F) override {
    outs() << "Inside Function: " << F.getName() << "\n";
    int i = 0;
    map<int, Instruction*> work;
    for (BasicBlock &BB : F)
        for (Instruction &I : BB) {
            if (i == 15)
                work.insert({i, &I});
            i++;
        }
    std::map<int, Instruction*>::iterator it = work.begin();
    it->second->eraseFromParent();
    return true;
}
The above is my code snippet. In it, I would like to remove an arbitrary instruction, just for the sake of learning how to use this API. But it ends up with a segmentation fault no matter what I try. I need some guidance here, please. This is the output I get:
Inside Function: change_g
While deleting: i32 %
Use still stuck around after Def is destroyed: %add = add nsw i32 <badref>, %l
opt: /home/user/llvm-project/llvm/lib/IR/Value.cpp:103: llvm::Value::~Value(): Assertion `materialized_use_empty() && "Uses remain when a value is destroyed!"' failed.
First of all, it's not a segmentation fault but an assertion failure, which tells you that something went wrong. In particular, the message explains that you cannot erase an instruction while any of its uses are still present in the function.
Usually you'd first create a new instruction, replace all uses of the to-be-removed instruction with the new result (via Value::replaceAllUsesWith()), and only then erase it.
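A minimal sketch of that order of operations, assuming inst points at the instruction you want to drop. Here the remaining uses are replaced with an undef value of the same type just to keep the sketch self-contained; in real code you would typically use the result of the replacement instruction, as described above.

// inst is the Instruction* to remove; deal with its remaining uses first.
if (!inst->use_empty())
    inst->replaceAllUsesWith(UndefValue::get(inst->getType()));
inst->eraseFromParent();  // now safe: no uses are left behind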

How to find C or C++ code leaks using Instruments (Leaks) - Xcode?

int* foo = new int[10];
foo = NULL;
sleep(60);
Instruments is not finding any leak in the above code. How do I use the Instruments tool to find C or C++ leaks? I have searched Stack Overflow, but most of the explanations are based on Objective-C code...
The issue is that the compiler will optimize out the call to new in the following code fragment:
int* foo = new int[10];
foo = NULL;
sleep(60);
as it's smart enough to see that the allocation is never used. If you add code that uses foo, the compiler won't do this and you should see the leak you are expecting:
int* foo = new int[10];
foo[3] = 23;
foo[8] = 45;
printf("%d %d\n", foo[3], foo[8]);
foo = NULL;
sleep(60);

C++ memory issue

I'm currently building a prime number finder, and am having a memory problem:
This may be due to a corruption of the heap, which indicates a bug in PrimeNumbers.exe or any of the DLLs it has loaded.
P.S. Please don't tell me if this isn't the right way to find prime numbers; I want to figure it out myself!
Code:
// PrimeNumbers.cpp : main project file.
#include "stdafx.h"
#include <vector>
using namespace System;
using namespace std;
int main(array<System::String ^> ^args)
{
    Console::WriteLine(L"Until what number do you want to stop?");
    signed const int numtstop = Convert::ToInt16(Console::ReadLine());

    bool * isvalid = new bool[numtstop];

    int allattempts = numtstop*numtstop; // Find all the possible combinations of numbers

    for (int currentnumb = 0; currentnumb <= allattempts; currentnumb++) // For each number try to find a combination
    {
        for (int i = 0; i <= numtstop; i++)
        {
            for (int tnumb = 0; tnumb <= numtstop; tnumb++)
            {
                if (i*tnumb == currentnumb)
                {
                    isvalid[currentnumb] = false;
                    Console::WriteLine("Error");
                }
            }
        }
    }

    Console::WriteLine(L"\nAll prime number in the range of:" + Convert::ToString(numtstop));
    for (int pnts = 0; pnts <= numtstop; pnts++)
    {
        if (isvalid[pnts] != false)
        {
            Console::WriteLine(pnts);
        }
    }
    return 0;
}
I don't see the memory problem.
Please help.
You are allocating numtstop booleans, but you index that array using a variable that ranges from zero to numtstop*numtstop. This is severely out of bounds for all numtstop values greater than 1.
You should either allocate more booleans (numtstop*numtstop + 1 of them, since currentnumb reaches numtstop*numtstop inclusive) or use a different variable to index into isvalid (for example, i, which ranges from 0 to numtstop). I am sorry I cannot be more precise than that, because of your request not to comment on your algorithm for finding primes.
P.S. If you would like to read something on the topic of finding small primes, here is a link to a great book by Dijkstra. He teaches you how to construct a program for the first 1000 primes on pages 35..49.
The problem is that you are using native C++ in managed C++/CLI code, and using new without delete, of course.
`currentnumb` is bigger than the size of the array, which is just numtstop. You are probably going out of bounds; this might be your issue.
You also never delete[] your isvalid local; this is a memory leak.
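A minimal sketch of the allocation fixes suggested above, keeping the original algorithm untouched (the +1 accounts for the loops using <=, and the initialization matters because the code later reads entries it may never have written):

// Size the array to cover every index currentnumb can take (0 .. numtstop*numtstop).
bool* isvalid = new bool[numtstop * numtstop + 1];
for (int k = 0; k <= numtstop * numtstop; k++)
    isvalid[k] = true;          // assume "prime" until a factor pair is found

// ... the existing triple loop, unchanged ...

delete[] isvalid;               // release the native allocation when done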

XNA 3.1 Preserving the Depth Buffer before it clears

I'm trying to get around XNA 3.1's automatic clearing of the depth buffer when switching render targets by copying the IDirect3DSurface9 from the depth buffer before the render targets are switched, and then restoring the depth buffer at a later point.
In the code, the getDepthBuffer method is a pointer to the IDirect3DDevice9 GetDepthStencilBuffer function. The pointer to that method seems to be correct, but when I try to get the IDirect3DSurface9 pointer, the call returns an error (0x8876086C - D3DERR_INVALIDCALL). The surfacePtr pointer ends up pointing to 0x00000000.
Any idea why it is not working? And any ideas on how to fix it?
Here's the code:
public static unsafe Texture2D GetDepthStencilBuffer(GraphicsDevice g)
{
    if (g.DepthStencilBuffer.Format != DepthFormat.Depth24Stencil8)
    {
        return null;
    }

    Texture2D t2d = new Texture2D(g, g.DepthStencilBuffer.Width, g.DepthStencilBuffer.Height, 1, TextureUsage.None, SurfaceFormat.Color);

    FieldInfo f = typeof(GraphicsDevice).GetField("pComPtr", BindingFlags.NonPublic | BindingFlags.GetField | BindingFlags.Instance);
    object o = f.GetValue(g);
    void* devicePtr = Pointer.Unbox(f.GetValue(g));
    void* getDepthPtr = AccessVTable(devicePtr, 160);
    void* surfacePtr;

    var getDepthBuffer = (GetDepthStencilBufferDelegate)Marshal.GetDelegateForFunctionPointer(new IntPtr(getDepthPtr), typeof(GetDepthStencilBufferDelegate));
    var rv = getDepthBuffer(&surfacePtr);

    SetData(t2d, 0, surfacePtr, g.DepthStencilBuffer.Width, g.DepthStencilBuffer.Height, (uint)(g.DepthStencilBuffer.Width / 4), D3DFORMAT.D24S8);

    Marshal.Release(new IntPtr(devicePtr));
    Marshal.Release(new IntPtr(getDepthPtr));
    Marshal.Release(new IntPtr(surfacePtr));

    return t2d;
}
XNA 3.1 will not clear your depth-stencil buffer upon changing render targets; it will, however, resolve it (so that it's unusable for depth tests) if you are not careful with your render target changes.
For example:
SetRenderTarget(someRenderTarget)
DrawStuff()
SetRenderTarget(null)
SetRenderTarget(someOtherRenderTarget)
Will cause the depth-stencil buffer to be resolved, but the following will not:
SetRenderTarget(someRenderTarget)
DrawStuff()
SetRenderTarget(someOtherRenderTarget)
I'm unsure why this happens with XNA 3.1 (and earlier versions), but ever since figuring that out I've been able to keep the same depth-stencil buffer alive through many render target changes, and even through Clear operations, as long as the clear specifies only ClearOptions.Target.
