In my first few dummy apps (written for practice while learning) I came across a lot of EXC_BAD_ACCESS errors, which taught me roughly this: a bad access means you are touching/accessing an object that you shouldn't, because either it is not allocated yet, it has already been deallocated, or you are simply not authorized to access it.
Look at this sample code, which has a bad-access issue because I am trying to modify a constant:
-(void)myStartMethod {
    NSString *str = @"testing";
    char *charStr = (char *)[str UTF8String]; // cast away const so this compiles
    charStr[4] = '\0'; // bad access on this line: the buffer lives in read-only memory.
    NSLog(@"%s", charStr);
}
While segmentation fault says: a segmentation fault is a specific kind of error caused by accessing memory that "does not belong to you." It's a helper mechanism that keeps you from corrupting memory and introducing hard-to-debug memory bugs. Whenever you get a segfault you know you are doing something wrong with memory (more description here).
I want to know two things.
First, am I right about Objective-C's EXC_BAD_ACCESS? Do I understand it correctly?
Second, are EXC_BAD_ACCESS and segmentation fault the same thing, with Apple merely having given it its own name?
No, EXC_BAD_ACCESS is not the same as SIGSEGV.
EXC_BAD_ACCESS is a Mach exception (Mach, combined with BSD, makes up xnu, the Mac OS X kernel), while SIGSEGV is a POSIX signal. When a crash is reported with cause EXC_BAD_ACCESS, the signal is often given in parentheses immediately after: for instance, EXC_BAD_ACCESS(SIGSEGV). However, there is one other POSIX signal that can be seen in conjunction with EXC_BAD_ACCESS: SIGBUS, reported as EXC_BAD_ACCESS(SIGBUS).
SIGSEGV is most often seen when reading from or writing to an address that is not mapped at all in the memory map, like the NULL pointer, or when attempting to write to a read-only memory location (as in your example above). SIGBUS, on the other hand, can be seen even for addresses the process has legitimate access to. For instance, SIGBUS can smite a process that dares to load/store from/to an unaligned memory address with instructions that assume an aligned address, or a process that attempts to write to a page for which it does not have the required privilege level.
Thus EXC_BAD_ACCESS can best be understood as the set of both SIGSEGV and SIGBUS, and refers to all ways of incorrectly accessing memory (whether because said memory does not exist, or does exist but is misaligned, privileged or whatnot), hence its name: exception – bad access.
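To make this concrete, here is a minimal C sketch (deliberately buggy; either marked line alone is enough to crash with EXC_BAD_ACCESS(SIGSEGV) on macOS):

#include <stdio.h>

int main(void) {
    int *p = NULL;
    *p = 42;                /* write to an unmapped address -> SIGSEGV */

    char *s = (char *)"hi"; /* string literals live in read-only pages */
    s[0] = 'H';             /* write to a read-only page -> also SIGSEGV (never reached) */
    return 0;
}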
To feast your eyes, here is the code, within the xnu-1504.15.3 (Mac OS X 10.6.8 build 10K459) kernel source code, file bsd/uxkern/ux_exception.c beginning at line 429, that translates EXC_BAD_ACCESS to either SIGSEGV or SIGBUS.
/*
 * ux_exception translates a mach exception, code and subcode to
 * a signal and u.u_code. Calls machine_exception (machine dependent)
 * to attempt translation first.
 */
static
void ux_exception(
    int                      exception,
    mach_exception_code_t    code,
    mach_exception_subcode_t subcode,
    int                      *ux_signal,
    mach_exception_code_t    *ux_code)
{
    /*
     * Try machine-dependent translation first.
     */
    if (machine_exception(exception, code, subcode, ux_signal, ux_code))
        return;

    switch(exception) {

    case EXC_BAD_ACCESS:
        if (code == KERN_INVALID_ADDRESS)
            *ux_signal = SIGSEGV;
        else
            *ux_signal = SIGBUS;
        break;

    case EXC_BAD_INSTRUCTION:
        *ux_signal = SIGILL;
        break;
    ...
Edit in relation to another of your questions
Please note that exception here does not refer to an exception at the level of the language, of the type one may catch with syntactic sugar like try{} catch{} blocks. Exception here refers to the actions of a CPU on encountering certain types of mistakes in your program (they may or may not be fatal), like a null-pointer dereference, that require outside intervention.
When this happens, the CPU is said to raise what is commonly called either an exception or an interrupt. This means that the CPU saves what it was doing (the context) and deals with the exceptional situation.
To deal with such an exceptional situation, the CPU does not start executing any "exception-handling" code (catch-blocks or suchlike) in your application. It first gives the OS control, by starting to execute a kernel-provided piece of code called an interrupt service routine. This is a piece of code that figures out what happened to which process, and what to do about it. The OS thus has an opportunity to judge the situation, and take the action it wants.
The action it does for an invalid memory access (such as a null pointer dereference) is to signal the guilty process with EXC_BAD_ACCESS(SIGSEGV). The action it does for a misaligned memory access is to signal the guilty process with EXC_BAD_ACCESS(SIGBUS). There are many other exceptional situations and corresponding actions, not all of which involve signals.
We're now back in the context of your program. If your program receives the SIGSEGV or SIGBUS signal, it will invoke the signal handler that was installed for that signal, or the default one if none was installed. It is rare for people to install custom handlers for SIGSEGV and SIGBUS, and the default handlers shut down your program, so that is what you usually get.
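For illustration, here is a minimal C sketch of installing a custom SIGSEGV handler with the POSIX sigaction API (the handler deliberately restricts itself to async-signal-safe calls such as write()):

#include <signal.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static void on_segv(int sig) {
    /* Only async-signal-safe functions may be used here; write() is one. */
    static const char msg[] = "caught SIGSEGV, exiting\n";
    write(STDERR_FILENO, msg, sizeof msg - 1);
    _exit(EXIT_FAILURE);
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_segv;
    sigaction(SIGSEGV, &sa, NULL);

    int *p = NULL;
    *p = 1;   /* raises EXC_BAD_ACCESS(SIGSEGV); our handler runs instead of the default */
    return 0;
}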
This sort of exception is therefore completely unlike the sort one throws in try{} blocks and catch{}es. Those exceptions are handled purely within the application, without involving the OS at all. Here, a throw statement is simply a glorified jump to the innermost catch block that handles that exception. As the exception bubbles up through the stack, it unwinds the stack behind it, running destructors and suchlike as needed.
Basically yes: an EXC_BAD_ACCESS is usually paired with a SIGSEGV, which is a signal that warns about a segmentation fault.
A segmentation fault is raised whenever you are working with a pointer that points to invalid data (maybe not belonging to the process, maybe read-only, maybe an invalid address in general).
Don't think about the segmentation fault in terms of "accessing an object"; you are accessing a memory location, so an address. That address must be considered valid by the OS memory protection system.
Not all errors related to accessing invalid data can be tracked by the memory manager. Think of a pointer to a stack-allocated variable: the address remains valid even though its contents are no longer meaningful once the stack frame has been torn down.
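A small C sketch of that last case (deliberately undefined behaviour; a typical compiler only warns): the address stays mapped, so no fault is raised, but the data is garbage:

#include <stdio.h>

static int *get_local(void) {
    int local = 42;
    return &local;      /* address of a stack slot that dies on return */
}

int main(void) {
    int *p = get_local();
    printf("%d\n", *p); /* no segfault: the page is still mapped, the value is junk */
    return 0;
}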
Related
I hope this question isn't too open-ended. I ran into a memory issue with Rust, where I got an "out of memory" error from calling next on an Iterator trait object. I'm unsure how to debug it. Prints have only brought me to the point where the failure occurs. I'm not very familiar with other tools such as ltrace, so although I could create a trace (231MiB, pff), I didn't really know what to do with it. Is a trace like that useful? Would I do better to grab gdb/lldb? Or Valgrind?
In general I would try to do the following approach:
Boilerplate reduction: Try to narrow down the problem of the OOM, so that you don't have too much additional code around. In other words: the quicker your program crashes, the better. Sometimes it is also possible to rip out a specific piece of code and put it into an extra binary, just for the investigation.
Problem size reduction: Lower the problem from an OOM to a simple "too much memory", so that you can actually tell that some part wastes memory without it leading to an OOM. If it is too hard to tell whether you see the issue or not, you can lower the memory limit. On Linux, this can be done using ulimit:
ulimit -Sv 500000 # that's 500MB
./path/to/exe --foo
Information gathering: If your problem is small enough, you are ready to collect information that has a lower noise level. There are multiple things you can try. Just remember to compile your program with debug symbols. It might also be an advantage to turn off optimization, since optimization usually leads to information loss. Both can be achieved by NOT using the --release flag during compilation.
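For example, with a standard Cargo setup:

cargo build               # debug profile: symbols on, optimizations off
./target/debug/exe --foo  # reproduce, e.g. under the ulimit from step 2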
Heap profiling: One way is to use gperftools:
LD_PRELOAD="/usr/lib/libtcmalloc.so" HEAPPROFILE=/tmp/profile ./path/to/exe --foo
pprof --gv ./path/to/exe /tmp/profile/profile.0100.heap
This shows you a graph which symbolizes which parts of your program eat which amount of memory. See official docs for more details.
rr: Sometimes it's very hard to figure out what is actually happening, especially after you created a profile. Assuming you did a good job in step 2, you can use rr:
rr record ./path/to/exe --foo
rr replay
This will spawn a GDB with superpowers. The difference to a normal debug session is that you can not only continue but also reverse-continue. Basically your program is executed from a recording where you can jump back and forth as you want. This wiki page provides you some additional examples. One thing to point out is that rr only seems to work with GDB.
Good old debugging: Sometimes you get traces and recordings that are still way too large. In that case you can (in combination with the ulimit trick) just use GDB and wait until the program crashes:
gdb --args ./path/to/exe --foo
You now should get a normal debugging session where you can examine what the current state of the program was. GDB can also be launched with coredumps. The general problem with that approach is that you cannot go back in time and you cannot continue with execution. So you only see the current state including all stack frames and variables. Here you could also use LLDB if you want.
(Potential) fix + repeat: After you have a clue about what might be going wrong, you can try to change your code. Then try again. If it's still not working, go back to step 3 and try again.
Valgrind and other tools work fine, and should work out of the box as of Rust 1.32. Earlier versions of Rust require changing the global allocator from jemalloc to the system's allocator so that Valgrind and friends know how to monitor memory allocations.
In this answer, I use the macOS developer tool Instruments, as I'm on macOS, but Valgrind / Massif / Cachegrind work similarly.
Example: An infinite loop
Here's a program that "leaks" memory by pushing 1MiB Strings into a Vec and never freeing it:
use std::{thread, time::Duration};

fn main() {
    let mut held_forever = Vec::new();
    loop {
        held_forever.push("x".repeat(1024 * 1024));
        println!("Allocated another");
        thread::sleep(Duration::from_secs(3));
    }
}
You can see memory growth over time, as well as the exact stack trace that allocated the memory.
Example: Cycles in reference counts
Here's an example of leaking memory by creating an infinite reference cycle:
use std::{cell::RefCell, rc::Rc};

struct Leaked {
    data: String,
    me: RefCell<Option<Rc<Leaked>>>,
}

fn main() {
    let data = "x".repeat(5 * 1024 * 1024);
    let leaked = Rc::new(Leaked {
        data,
        me: RefCell::new(None),
    });
    let me = leaked.clone();
    *leaked.me.borrow_mut() = Some(me);
}
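As an aside, on Linux a plain Valgrind run over that second example would also flag the cycle at exit (the binary name here is a placeholder):

valgrind --leak-check=full ./target/debug/rc-cycle

Because the only strong references to the Rc live inside the cycle itself, Valgrind reports its blocks as lost when the program ends.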
See also:
Why does Valgrind not detect a memory leak in a Rust program using nightly 1.29.0?
Handling memory leak in cyclic graphs using RefCell and Rc
Minimal `Rc` Dependency Cycle
In general, to debug, you can use either a log-based approach (either by inserting the logs yourself, or having a tool such as ltrace, ptrace, ... generate the logs for you) or you can use a debugger.
Note that ltrace, ptrace or debugger-based approaches require that you be able to reproduce the problem; I tend to favor manual logs because I work in an industry where bug reports are generally too imprecise to allow immediate reproduction (and thus we use logs to create the reproducer scenario).
Rust supports both approaches, and the standard toolset that one uses for C or C++ programs works well for it.
My personal approach is to have some logging in place to quickly narrow down where the issue occurs, and if logging is insufficient, to fire up a debugger for a more fine-grained inspection. In this case I would recommend going straight for the debugger.
A panic is generated, which means that by breaking on the call to the panic hook, you get to see both the call stack and memory state at the moment where things go awry.
Launch your program with the debugger, set a break point on the panic hook, run the program, profit.
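A sketch of that session with GDB: rust_panic is an unmangled function that the Rust runtime keeps around precisely so debuggers can break on it (the binary name is a placeholder):

gdb ./target/debug/exe
(gdb) break rust_panic
(gdb) run
(gdb) backtrace   # once the panic fires, inspect the offending frames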
I'm using PLCrashReporter in my iOS project and I'm curious: is it possible to use Core Foundation code in my custom crash callback? The thing that handles my needs is CFPreferences. Here is part of the code that I created:
void LMCrashCallback(siginfo_t *info, ucontext_t *uap, void *context) {
    CFStringRef networkStatusOnCrash;
    networkStatusOnCrash = (CFStringRef)CFPreferencesCopyAppValue(networkStatusKey,
        kCFPreferencesCurrentApplication);
    CFStringRef additionalInfo = CFStringCreateWithFormat(
        NULL, NULL, CFSTR("Additional Crash Properties: [Internet: %@]"), networkStatusOnCrash);
    CFPreferencesSetAppValue(additionalInfoKey, additionalInfo,
        kCFPreferencesCurrentApplication);
    CFPreferencesAppSynchronize(kCFPreferencesCurrentApplication);
}
My goal is to collect some system information just in time when the app crashes, e.g. the Internet connection type.
I know it is not a good idea to create my own crash callback, since only async-safe functions should be called there, but this could help.
Also, as another option: is there a way to extend the PLCrashReportSystemInfo class somehow?
This is very dangerous. In particular, the call to CFStringCreateWithFormat allocates memory. Allocating memory in the middle of a crash handler can lead to a battery-draining deadlock (yep; had that bug…). For example, if you were in the middle of free() (which is not an uncommon place to crash), you may already be holding a spinlock on the heap. When you call malloc to get some memory, you may spin on the heap lock again and deadlock in a tight loop. The heap needs to be locked so often and for such short periods of time that it doesn't use a blocking lock. It does the equivalent of while (locked) {}.
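By way of contrast, here is a C sketch of the kind of work that is safe at crash time: pre-allocate and pre-format everything during normal operation, and restrict the handler itself to async-signal-safe calls such as write(). (g_status and record_network_status are illustrative names, not PLCrashReporter API.)

#include <string.h>
#include <unistd.h>

static char g_status[256];  /* filled in during normal operation */

void record_network_status(const char *status) {
    /* Called whenever reachability changes, i.e. NOT at crash time. */
    strncpy(g_status, status, sizeof g_status - 1);
}

void crash_callback(void) {
    /* No malloc, no CFString*, no locks: just emit the prebuilt buffer. */
    write(STDERR_FILENO, g_status, strnlen(g_status, sizeof g_status));
}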
You seem to just be reading a preference and copying it to another preference. There's no reason to do that inside a crash handler. Just check hasPendingCrashReport during startup (which I assume you're doing already), and read the key then. It's not clear what networkStatusKey is, but it should still be there when you start up again.
If for any reason it's modified very early (before you call hasPendingCrashReport), you can grab it in main() before launching the app. Or you can grab it in a +load method, which is called even earlier.
I'm new to lldb and trying to diagnose an error by using po [$eax class]
The error shown in the UI is:
Thread 1: EXC_BREAKPOINT (code=EXC_i386_BPT, subcode=0x0)
Here is the lldb console including what I entered and what was returned:
(lldb) po [$eax class]
error: Execution was interrupted, reason: EXC_BAD_ACCESS (code=1, address=0xb06b9940).
The process has been returned to the state before expression evaluation.
The global breakpoint state toggle is off.
Your app is getting stopped because the code you are running hit an uncaught Mach exception. Mach exceptions are the equivalent of BSD signals for the Mach kernel, which makes up the lowest levels of the macOS operating system.
In this case, the particular Mach exception is EXC_BREAKPOINT. EXC_BREAKPOINT is a common source of confusion... Because it has the word "breakpoint" in the name people think that it is a debugger breakpoint. That's not entirely wrong, but the exception is used more generally than that.
EXC_BREAKPOINT is in fact the exception that the lower layers of Mach report when a certain instruction (a trap instruction) is executed. That trap instruction is used by lldb to implement breakpoints, but it is also used as an alternative to assert in various bits of system software. For instance, Swift uses this trap if you access past the end of an array. It is a way to stop your program right at the point of the error. If you are running outside the debugger, this leads to a crash. But if you are running in the debugger, control is returned to the debugger with this EXC_BREAKPOINT stop reason.
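You can see such a trap outside of any debugger-set breakpoint with a one-line C sketch; __builtin_debugtrap() is a clang-specific builtin that emits the architecture's trap instruction (int3 on Intel, brk on arm64):

int main(void) {
    __builtin_debugtrap();  /* under lldb, the stop reason is EXC_BREAKPOINT */
    return 0;
}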
To avoid confusion, lldb will never show you EXC_BREAKPOINT as the stop reason if the trap was one that lldb inserted in the program you are debugging to implement a debugger breakpoint. It will always say breakpoint n.n instead.
So if you see a thread stopped with EXC_BREAKPOINT as its stop reason, that means you've hit some kind of fatal error, usually in some system library used by your program. A backtrace at this point will show you what component is raising that error.
Anyway, having hit that error, you then tried to figure out the class of the value in the eax register by calling the class method on it with po [$eax class]. Calling that method (which causes code to be run in the program you are debugging) led to a crash. That's what the "error" message you cite was telling you.
That's almost surely because $eax doesn't point to a valid ObjC object, so you're just calling a method on some random value, and that's crashing.
Note, if you are debugging a 64 bit program, then $eax is actually the lower 32 bits of the real argument passing register - $rax. The bottom 32 bits of a 64 bit pointer is unlikely to be a valid pointer value, so it is not at all surprising that calling class on it led to a crash.
If you were trying to call class on the first passed argument (self in ObjC methods) on 64 bit Intel, you really wanted to do:
(lldb) po [$rax class]
Note, that was also unlikely to work, since $rax only holds self at the start of the function. Then it gets used as a scratch register. So if you are any ways into the function (which the fact that your code fatally failed some test makes seem likely) $rax would be unlikely to still hold self.
Note also, if this is a 32 bit program, then $eax is not in fact used for argument passing - 32 bit Intel code passes arguments on the stack, not in registers.
Anyway, the first thing to do to figure out what went wrong is to print the backtrace when you get this exception, and see what code was being run at the time the error occurred.
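For reference, the backtrace command in lldb is:

(lldb) thread backtrace

or its short alias bt.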
Cleaning the project and restarting Xcode worked for me.
I'm adding my solution, as I've struggled with the same problem and I didn't find this solution anywhere.
In my case I had to run Product -> Clean Build Folder (Clean + Option key) and rebuild my project. Breakpoints and lldb commands then started to work properly.
I'm trying to track down an access violation. Reproducibility seems non-deterministic, and rare, so I want to check a few of my assumptions before I go any further.
The access violation is raised in FastMM4, version 4.991, in the function DebugGetMem, in the following code:
if (ASize > (MaximumMediumBlockSize - BlockHeaderSize - FullDebugBlockOverhead))
  or CheckFreeBlockUnmodified(Result, GetAvailableSpaceInBlock(Result) + BlockHeaderSize, boGetMem) then
begin
  {Set the allocation call stack}
  GetStackTrace(@PFullDebugBlockHeader(Result).AllocationStackTrace, StackTraceDepth, 1);
  {Set the thread ID of the thread that allocated the block}
  PFullDebugBlockHeader(Result).AllocatedByThread := GetThreadID; // ** AV here
  {Block is now in use: it was allocated by this routine}
  PFullDebugBlockHeader(Result).AllocatedByRoutine := @DebugGetMem;
The exception is:
Project Workstation.exe raised exception class $C0000005 with message 'access violation at 0x01629099: read of address 0x66aed8f8'.
The call stack is usually the same. It's being called from a paint event on the virtual treeview in which I call Format('%s %s %s', [vid, node, GetName()]) though I doubt that is really relevant (other than that Format allocates dynamic memory).
I'm using FullDebugMode (obviously) and CheckHeapForCorruption options.
I've also established the following:
Turning on CatchUseOfFreedInterfaces doesn't show anything new. I still get the same access violation, and no additional diagnostics.
I once reproduced the crash with FullDebugModeScanMemoryPoolBeforeEveryOperation := True, although I can't remember whether CatchUseOfFreedInterfaces was on or off on this occasion.
It isn't a thread concurrency issue; my application is single-threaded. (Actually, this isn't quite true. I'm using Virtual TreeView, which creates a hidden worker thread, but if this really is the cause then the bug is in Virtual TreeView, not my code, which is rather unlikely.)
Am I right in thinking that, despite CheckHeapForCorruption not catching anything, this exception can only be due to my code corrupting the heap? Is there anything else that could cause FastMM4 to crash in this way?
Any suggestions for further diagnostics, or even making the crash more reproducible?
Strange though it may seem, this is normal behaviour. If you switch to the CPU view you will see that the instruction pointer is located inside the FastMM_FullDebugMode.dll module. Some of the debugging functionality of FastMM can, by design, raise access violations. If you continue execution you will find that your application runs correctly.
It can be quite frustrating that debugging sessions are interrupted in this way. I had some discussion with the FastMM author on a related issue. It does seem that the FastMM debug DLL is designed to work this way; the conclusion is that there's not a lot that can be done to stop these external exceptions being raised.
If anyone out there recognises this frustration, and has a good solution, I for one would be eternally grateful.
I'm a member of a team that uses Delphi 2007 for a large application, and we suspect heap corruption because sometimes there are strange bugs that have no other explanation.
I believe that the range-checking option of the compiler applies only to arrays. I want a tool that gives an exception or logs when there is a write to a memory address that is not allocated by the application.
Regards
EDIT: The error is of type:
Error: Access violation at address 00404E78 in module 'BoatLogisticsAMCAttracsServer.exe'. Read of address FFFFFFDD
EDIT2: Thanks for all the suggestions. Unfortunately I think that the problem is deeper than that. We use a patched version of Bold for Delphi, as we own the source. Probably some errors were introduced in the Bold framework. Yes, we have a log with call stacks that are handled by JCL, and also trace messages. So a call stack with the exception can look like this:
20091210 16:02:29 (2356) [EXCEPTION] Raised EBold: Failed to derive ServerSession.mayDropSession: Boolean
OCL expression: not active and not idle and timeout and (ApplicationKernel.allinstances->first.CurrentSession <> self)
Error: Access violation at address 00404E78 in module 'BoatLogisticsAMCAttracsServer.exe'. Read of address FFFFFFDD. At Location BoldSystem.TBoldMember.CalculateDerivedMemberWithExpression (BoldSystem.pas:4016)
Inner Exception Raised EBold: Failed to derive ServerSession.mayDropSession: Boolean
OCL expression: not active and not idle and timeout and (ApplicationKernel.allinstances->first.CurrentSession <> self)
Error: Access violation at address 00404E78 in module 'BoatLogisticsAMCAttracsServer.exe'. Read of address FFFFFFDD. At Location BoldSystem.TBoldMember.CalculateDerivedMemberWithExpression (BoldSystem.pas:4016)
Inner Exception Call Stack:
[00] System.TObject.InheritsFrom (sys\system.pas:9237)
Call Stack:
[00] BoldSystem.TBoldMember.CalculateDerivedMemberWithExpression (BoldSystem.pas:4016)
[01] BoldSystem.TBoldMember.DeriveMember (BoldSystem.pas:3846)
[02] BoldSystem.TBoldMemberDeriver.DoDeriveAndSubscribe (BoldSystem.pas:7491)
[03] BoldDeriver.TBoldAbstractDeriver.DeriveAndSubscribe (BoldDeriver.pas:180)
[04] BoldDeriver.TBoldAbstractDeriver.SetDeriverState (BoldDeriver.pas:262)
[05] BoldDeriver.TBoldAbstractDeriver.Derive (BoldDeriver.pas:117)
[06] BoldDeriver.TBoldAbstractDeriver.EnsureCurrent (BoldDeriver.pas:196)
[07] BoldSystem.TBoldMember.EnsureContentsCurrent (BoldSystem.pas:4245)
[08] BoldSystem.TBoldAttribute.EnsureNotNull (BoldSystem.pas:4813)
[09] BoldAttributes.TBABoolean.GetAsBoolean (BoldAttributes.pas:3069)
[10] BusinessClasses.TLogonSession._GetMayDropSession (code\BusinessClasses.pas:31854)
[11] DMAttracsTimers.TAttracsTimerDataModule.RemoveDanglingLogonSessions (code\DMAttracsTimers.pas:237)
[12] DMAttracsTimers.TAttracsTimerDataModule.UpdateServerTimeOnTimerTrig (code\DMAttracsTimers.pas:482)
[13] DMAttracsTimers.TAttracsTimerDataModule.TimerKernelWork (code\DMAttracsTimers.pas:551)
[14] DMAttracsTimers.TAttracsTimerDataModule.AttracsTimerTimer (code\DMAttracsTimers.pas:600)
[15] ExtCtrls.TTimer.Timer (ExtCtrls.pas:2281)
[16] Classes.StdWndProc (common\Classes.pas:11583)
The inner exception part is the call stack at the moment an exception is re-raised.
EDIT3: The theory right now is that the Virtual Method Table (VMT) is somehow broken. When this happens there is no indication of it; only when a method is called is an exception raised (ALWAYS at address FFFFFFDD, -35 decimal), but by then it is too late: you don't know the real cause of the error. Any hint on how to catch a bug like this is really appreciated!!! We have tried SafeMM, but the problem is that the memory consumption is too high even when the 3 GB flag is used. So now I try to give a bounty to the SO community :)
EDIT4: One hint is that according to the log there is often (or even always) another exception before this one. It can be, for example, optimistic locking in the database. We have tried to raise exceptions by force, but in the test environment it just works fine.
EDIT5: Story continues... I did a search on the logs for the last 30 days now. The result:
"Read of address FFFFFFDB" 0
"Read of address FFFFFFDC" 24
"Read of address FFFFFFDD" 270
"Read of address FFFFFFDE" 22
"Read of address FFFFFFDF" 7
"Read of address FFFFFFE0" 20
"Read of address FFFFFFE1" 0
So the current theory is that an enum (there are lots of them in Bold) overwrites a pointer. I got 5 hits with different addresses above. That could mean the enum holds 5 values, of which the second is the most used. If there is an exception, a rollback should occur in the database and Bold objects should be destroyed. Maybe there is a chance that not everything is destroyed and an enum can still write to an address location. If this is true, maybe it is possible to search the code with a regexp for an enum with 5 values?
EDIT6: To summarize: no, there is no solution to the problem yet. I realize that I may have misled you a bit with the call stack. Yes, there is a timer in that one, but there are other call stacks without a timer. Sorry for that. But there are 2 common factors.
An exception with Read of address FFFFFFxx.
Top of callstack is System.TObject.InheritsFrom (sys\system.pas:9237)
This convinces me that VilleK describes the problem best.
I'm also convinced that the problem is somewhere in the Bold framework.
But the BIG question is: how can problems like this be solved?
It is not enough to have an Assert as VilleK suggests, since the damage has already happened and the call stack is gone by that moment. So, to describe my view of what may cause the error:
Somewhere a pointer is assigned a bad value; 1 in this case, but it could also be 0, 2, 3, etc.
An object is assigned to that pointer.
There is a method call through that object's base class. This causes TObject.InheritsFrom to be called, and an exception appears at address FFFFFFDD.
Those 3 events can be close together in the code, but they may also occur much later; I think this is true of the last method call in particular.
EDIT7: We work closely with the author of Bold, Jan Norden, and he recently found a bug in the OCL evaluator in the Bold framework. When this was fixed, these kinds of exceptions decreased a lot, but they still occasionally occur. It is a big relief that this is almost solved.
You write that you want there to be an exception if
there is a write on a memory address that is not allocated by the application
but that happens anyway; both the hardware and the OS make sure of that.
If you mean you want to check for invalid memory writes within your application's allocated address range, then there is only so much you can do. You should use FastMM4, and use it with its most verbose and paranoid settings in the debug build of your application. This will catch a lot of invalid writes, accesses to already released memory and the like, but it can't catch everything. Consider a dangling pointer that points to another writeable memory location (like the middle of a large string or an array of float values): writing through it will succeed, and it will trash other data, but there's no way for the memory manager to catch such an access.
I don't have a solution but there are some clues about that particular error message.
System.TObject.InheritsFrom subtracts the constant vmtParent from the Self pointer (the class) to get a pointer to the address of the parent class.
In Delphi 2007 vmtParent is defined:
vmtParent = -36;
So the error $FFFFFFDD (-35) suggests that the class pointer is 1 in this case.
Here is a test case to reproduce it:
procedure TForm1.FormCreate(Sender: TObject);
var
  I: Integer;
  O: TObject;
begin
  I := 1;
  O := TObject(@I); // O's "class pointer" is now the integer 1
  O.InheritsFrom(TObject);
end;
I've tried it in Delphi 2010 and get 'Read of address FFFFFFD1' because the vmtParent is different between Delphi versions.
The problem is that this happens deep inside the Bold framework so you may have trouble guarding against it in your application code.
You can try this on your objects that are used in the DMAttracsTimers-code (which I assume is your application code):
Assert(Integer(Obj.ClassType)<>1,'Corrupt vmt');
It sounds like you have memory corruption of object instance data.
The VMT itself isn't getting corrupted, FWIW: the VMT is (normally) stored in the executable, and the pages that map to it are read-only. Rather, as VilleK says, it looks like the first field of the instance data (the VMT pointer) in your case got overwritten with a 32-bit integer of value 1. This is easy enough to verify: check the instance data of the object whose method call failed, and verify that the first dword is 00000001.
If it is indeed the VMT pointer in the instance data that is being corrupted, here's how I'd find the code that corrupts it:
Make sure there is an automated way to reproduce the issue that doesn't require user input. The issue may only be reproducible on a single machine without reboots between reproductions, owing to how Windows may choose to lay out memory.
Reproduce the issue and note the address of the instance data whose memory is corrupted.
Rerun and check the second reproduction: make sure that the address of the instance data that was corrupted in the second run is the same as the address from the first run.
Now, step into a third run, put a 4-byte data breakpoint on the section of memory indicated by the previous two runs. The point is to break on every modification to this memory. At least one break should be the TObject.InitInstance call which fills in the VMT pointer; there may be others related to instance construction, such as in the memory allocator; and in the worst case, the relevant instance data may have been recycled memory from previous instances. To cut down on the amount of stepping needed, make the data breakpoint log the call stack, but not actually break. By checking the call stacks after the virtual call fails, you should be able to find the bad write.
mghie is right of course (FastMM4 calls the flag FullDebugMode, or something like that).
Note that this usually works via barriers placed just before and after a heap allocation that are checked regularly (on every heap-manager access?).
This has two consequences:
the place where FastMM detects the error might deviate from the spot where it actually happens
a totally random write (not an overflow of an existing allocation) might not be detected.
So here are some other things to think about:
enable runtime checking
review all your compiler's warnings.
Try to compile with a different Delphi version or FPC. Other compilers/RTLs/heap managers have different layouts, and that could lead to the error being caught more easily.
If all that yields nothing, try to simplify the application until the problem goes away. Then investigate the most recently commented/ifdefed parts.
The first thing I would do is add MadExcept to your application and get a stack traceback that prints out the exact calling tree, which will give you some idea what is going on here. Instead of a random exception and a binary/hex memory address, you need to see a calling tree, with the values of all parameters and local variables from the stack.
If I suspect memory corruption in a structure that is key to my application, I will often write extra code to make tracking this bug possible.
For example, in-memory structures (class or record types) can be arranged to have a Magic1: Word field at the beginning and a Magic2: Word field at the end of each record. An integrity-check function can then verify those structures by checking, for each record, that Magic1 and Magic2 have not been changed from what the constructor set them to. The destructor would change Magic1 and Magic2 to other values such as $FFFF.
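As a sketch of the same guard-word idea in C (names and values are illustrative):

#include <assert.h>
#include <stdlib.h>

enum { MAGIC_LIVE = 0xBEEF, MAGIC_DEAD = 0xFFFF };

typedef struct {
    unsigned short magic1;  /* guard at the start of the record */
    double payload;
    unsigned short magic2;  /* guard at the end of the record */
} Guarded;

Guarded *guarded_new(void) {
    Guarded *g = malloc(sizeof *g);
    g->magic1 = MAGIC_LIVE;
    g->magic2 = MAGIC_LIVE;
    return g;
}

void guarded_check(const Guarded *g) {
    /* Trips if a stray write clobbered either guard, or the record was freed. */
    assert(g->magic1 == MAGIC_LIVE && g->magic2 == MAGIC_LIVE);
}

void guarded_free(Guarded *g) {
    g->magic1 = MAGIC_DEAD;  /* spoil the guards so use-after-free is caught */
    g->magic2 = MAGIC_DEAD;
    free(g);
}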
I also would consider adding trace-logging to my application. Trace logging in delphi applications often starts with me declaring a TraceForm form, with a TMemo on there, and the TraceForm.Trace(msg:String) function starts out as "Memo1.Lines.Add(msg)". As my application matures, the trace logging facilities are the way I watch running applications for overall patterns in their behaviour, and misbehaviour. Then, when a "random" crash or memory corruption with "no explanation" happens, I have a trace log to go back through and see what has lead to this particular case.
Sometimes it is not memory corruption but a simple basic error: I forget to check whether X is assigned and then dereference it, as in X.DoSomething(...), which assumes X is assigned when it isn't.
I noticed that a timer is in the stack trace.
I have seen a lot of strange errors where the cause was a timer event firing after the form was freed.
The reason is that a timer event can be put on the message queue and not get processed before the destruction of other components.
One way around that problem is to disable the timer as the first step in the form's destructor. After disabling the timer, call Application.ProcessMessages so any pending timer events are processed before the components are destroyed.
Another way is to check in the timer event whether the form is being destroyed (csDestroying in ComponentState).
Can you post the source code of this procedure?
BoldSystem.TBoldMember.CalculateDerivedMemberWithExpression
(BoldSystem.pas:4016)
So we can see what's happening on line 4016.
And could you also show the CPU view of this function?
(Just set a breakpoint on line 4016 of this procedure and run, then copy and paste the CPU view contents when you hit the breakpoint.) That way we can see which CPU instruction is at address 00404E78.
Could there be a problem with re-entrant code?
Try putting some guard code around the TTimer event handler code:
procedure TAttracsTimerDataModule.AttracsTimerTimer(ASender: TObject);
begin
  if FInTimer then
  begin
    // Let us know there is a problem or log it to a file, or something.
    // Even throw an exception.
    OutputDebugString('Timer called re-entrantly!');
    Exit; //======>
  end;

  FInTimer := True;
  try
    // method contents
  finally
    FInTimer := False;
  end;
end;
I think there is another possibility: the timer is fired to check whether there are "dangling logon sessions". Then a call is made on a TLogonSession object to check whether it may be dropped (_GetMayDropSession), right? But what if the object has already been destroyed? Maybe due to thread-safety issues, or just a .Free call instead of a FreeAndNil call (so a variable is still <> nil), etc. In the meantime, other objects are created, so the memory gets reused. If you try to access the variable some time later, you can/will get random errors...
An example:
procedure TForm11.Button1Click(Sender: TObject);
var
  c: TComponent;
  i: Integer;
  p: Pointer;
begin
  //create
  c := TComponent.Create(nil);
  //get size and memory
  i := c.InstanceSize;
  p := Pointer(c);
  //destroy component
  c.Free;
  //this call will succeed: the object is gone, but its memory is still "valid"
  c.InheritsFrom(TObject);
  //overwrite the freed memory
  FillChar(p^, i, 1); // note p^, not p: we want to fill the old instance data
  //CRASH!
  c.InheritsFrom(TObject);
end;
Access violation at address 004619D9 in module 'Project10.exe'. Read of address 01010101.
Isn't the problem that "_GetMayDropSession" is referencing a freed session variable?
I have seen this kind of error before, in TMS, where objects were freed and then referenced in an OnChange handler, etc. (only in some situations did it give errors; very difficult/impossible to reproduce; it is fixed now by TMS :-) ). Also with RemObjects sessions I got something similar (due to a programming bug of my own).
I would try to add a dummy variable to the session class and check for its value:
public
  iMagicNumber: Integer;

constructor Create:    iMagicNumber := 1234567;
destructor Destroy:    iMagicNumber := -1;
"other procedures":    Assert(iMagicNumber = 1234567);