When I run my code I get an error on this line:
personality = memory.readdwordunsigned(0x02024744)
This is the error message I am given by the console:
LuaInterface.LuaScriptException: [string "main"]:26: attempt to call field 'readdwordunsigned' (a nil value)
I have been testing and researching this for a while and cannot get it to work, despite the same approach being used in several other projects, such as this one: https://projectpokemon.org/forums/showthread.php?16681-Gen-3-Lua-Scripts
Some other information:
1. I am running the lua script on the BizHawk emulator.
2. If I change the line to memory.readbyte() I receive a different message, which leads me to believe that the console does not recognise memory.readdwordunsigned() as a function.
3. The script is in the same folder as the executable file for the emulator.
Thank you in advance for any help
It turns out that memory.readdwordunsigned() is no longer supported in the BizHawk emulator. After extensive research, and help from a comment posted on my question, I have managed to find a working alternative:
memory.usememorydomain("System Bus")
personality = memory.read_u32_le(0x02024744)
For anyone else who finds this answer useful, please note: a dword is unsigned and 4 bytes (32 bits) in size, hence read_u32. If you wanted to read a signed byte, for example, you would use memory.read_s8() instead. The le suffix means little-endian; be can be used instead for big-endian (e.g. memory.read_u32_be()).
It is important to set the memory domain before attempting to read from memory: the domain I had been using (IWRAM), like every other memory domain except the System Bus, would produce this error because the full address 0x02024744 is too large for those smaller domains.
Since moving to Xcode 8 and iOS 10, my Metal-based app fails to run at all. On launch I get the error: "Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED"
This appears two or three times in the console before the app crashes, because an MTLComputePipelineState is not successfully created and an error is thrown when calling the MTLDevice function makeComputePipelineState(function:). The only change I have made to the project is updating to Swift 3.0, but the console seems to imply a compiler error, which, given the crash, I assume means some Metal code is not compiling properly.
Any help would be appreciated; this is ageing me prematurely.
UPDATE:
I've located the line causing the trouble in the .metal file:
int gi1 = permMod12[ii+i1+perm[jj+j1+perm[kk+k1]]];
permMod12 is a static constant array declared as:
static constant int permMod12 [512] = {7,4,5,7...}
perm is similarly static and constant:
static constant int perm [512] = {151,160...}
The variables ii, i1, jj, j1, kk and k1 are all integers calculated in the same kernel.
The kernel is quite large, so I'll post a link to the GitHub location. The functions causing the issue are the ones called simplex3D and simplex4D. These are very similar, so focus on just one of them; they are carbon copies except that the 4D version has another set of variables in play (ll, l1, l, etc.).
The issue certainly seems to be with indexing these arrays using calculated variables, because when I change the indices to simple literals there is no error.
The kernel needs to be executed in order to get the error to occur.
Any help with this new info would be great.
I also encountered the error "Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED", and resolved it. It stemmed from the attempted use of 'threadgroup bool' variables. Refactoring the code to use 'threadgroup short' variables in place of the booleans resolved the error. (I could not find anything in the Metal version 2 specification saying whether bool is or is not a valid threadgroup type.)
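A minimal sketch of the refactor, with made-up kernel and variable names (the real code would of course do more with the flag):
#include <metal_stdlib>
using namespace metal;

kernel void flag_example(device int *out [[buffer(0)]],
                         uint tid [[thread_index_in_threadgroup]])
{
    // threadgroup bool done;   // this declaration triggered the XPC error for me
    threadgroup short done;     // workaround: a short used as a 0/1 flag
    if (tid == 0) {
        done = 0;
    }
    threadgroup_barrier(mem_flags::mem_threadgroup);
    if (tid == 0 && done == 0) {
        out[0] = 1;
        done = 1;
    }
}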
I've encountered this situation too, and it seems there is no single solution to this problem. In my case, the problem occurred when a texture addressed through a normalized-coordinate sampler was also accessed with the read() function. When I switched read() to sample(), this weird error went away. I hope your problem has been solved already.
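A rough sketch of the switch (texture, sampler, and kernel names are made up); the point is that sample() takes the normalized coordinates the sampler was configured for, while read() takes integer pixel coordinates:
#include <metal_stdlib>
using namespace metal;

constexpr sampler linearSampler(coord::normalized, filter::linear);

kernel void sample_fix(texture2d<float, access::sample> tex [[texture(0)]],
                       device float4 *out [[buffer(0)]],
                       uint2 gid [[thread_position_in_grid]])
{
    // float4 c = tex.read(gid);              // read() wants integer pixel coords
    float2 uv = (float2(gid) + 0.5) / float2(tex.get_width(), tex.get_height());
    float4 c = tex.sample(linearSampler, uv); // sample() wants normalized coords
    out[gid.y * tex.get_width() + gid.x] = c;
}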
Good morning.
I have a program that uses self-modifying code.
More precisely, I build the binaries, which are then changed by ELFPatch, which alters some functions' prologues.
I am working with Windriver WorkBench 3.3 & VxWorks 6.9 Update3.
I created a standard simulator (PENTIUM).
When I run this code on the simulator:
void replace_prolog(void* func_ptr) {
    char* p = (char*)func_ptr;
    for (int i = 0; i < PROLOGUE_SIZE; ++i)
        p[i] = m_prologue[i]; // m_prologue is a member array
    ...
}
Let's call the real prologue the compiler emitted the Original Prologue;
the prologue after ELFPatch altered the binary the Changed Prologue;
and the one that is placed at run-time the Replacement Prologue.
I get an exception (signal 11 - segmentation fault).
!! I realized it is VxWorks's .text segment protection.
So I created a SimPC-based VIP to be my simulator BSP, excluded INCLUDE_PROTECT_TEXT (and all its dependent kernel components),
and ran the simulator:
Now there is no exception!
Facts
1. Looking at the Memory Browser, I see the Changed Prologue bytes (the memory didn't change)!
2. Printing the buffer to the console prints the Replacement Prologue byte values! (Weird.)
3. Looking at the assembly view (mega weird): it shows the Changed Prologue hex values but the Original Prologue asm instructions (push bp; ...), even though the byte values do not match them.
My Questions
Has anyone had any experience with modifying the .text segment?
Has anyone encountered memory that would not change (without an exception/signal) on the simulator, and that is not a memory-mapped port or volatile?
Long Shot Assumption
I have an assumption that it is about caching: some hint tells VxWorks this region shouldn't change, so it doesn't write through; but I don't know how I can check that...
EDIT 2: I tried setting my pointers to volatile => same behavior!
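If the caching guess is right, one thing I could try is forcing a cache synchronization after patching; a sketch, assuming cacheLib's cacheTextUpdate() is available in my kernel image (PROLOGUE_SIZE as above, prologue bytes passed in):
#include <cacheLib.h>
#include <string.h>

void replace_prolog_synced(void* func_ptr, const char* prologue) {
    memcpy(func_ptr, prologue, PROLOGUE_SIZE);
    /* write the patched bytes out of the data cache and invalidate
       the instruction cache for that range */
    cacheTextUpdate(func_ptr, PROLOGUE_SIZE);
}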
Please Help.
This may not be the answer, but since you are seeing the expected output, it confirms that the .text section is changed. The only explanation I can think of is that if you are using host tools to look at the .text memory, the information may be being read from the host.
Did you type commands on the target to look at the memory location?
I forgot about the question, but I still have an answer.
There is an issue with the host tools: they do not show the changes to the .text section,
while on the target the bytes actually changed.
The function didn't work because my transformation was ruining dynamic linking.
My function's code had a call to a function with a constant string "Whatever".
When I transformed the function's code, I unintentionally changed the reference of a relocation pointer, which at load time became a bad absolute pointer.
Luckily for me, it pointed to a 0x00 buffer, and therefore printed an empty string without crashing.
Suggested Solutions:
1. Do not touch the relocated pointers, either when altering the executable or when altering it at run-time.
2. Create a static, self-contained executable with an absolute footprint => no dynamic relocation occurs that way.
3. Alter dl() to transform the altered relocated pointers back to their expected relocations.
4. Alter dl() to infer from the altered relocated pointers the expected altered absolute pointer, so the transformation creates an absolute pointer.
Note: I chose #2 because it is the simplest, and because in my system I do not need shared objects anyway.
In clang, is there a way to enable bounds checking for [] access to std::vectors and other STL containers, preferably when building in debug mode only?
I just spent hours hunting down a subtle bug that turned out to be caused by us accessing past the end of a std::vector. It doesn't need to do anything clever when it detects the error, just trap in the debugger so that I can find out where it happened and fix it in the code.
Is there a way to do this other than "create your own type that inherits from std::vector", which I'd like to avoid?
(I'm using clang version 3.1 if that makes a difference.)
libstdc++ has a mature debug mode using -D_GLIBCXX_DEBUG.
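For example (file name hypothetical; on Linux, clang uses libstdc++ by default):
clang++ -std=c++11 -D_GLIBCXX_DEBUG main.cpp -o main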
libc++ also has a debug mode using -D_LIBCPP_DEBUG, but as we can see from this mailing list discussion, Status of the libc++ debug mode, it is incomplete:
| My understanding is that this work was never completed and it's
| probably broken/incomplete.
|
| That is correct.
|
| It's on my list of things to fix/implement, but it's not something
| that I will get to anytime soon.
It does seem to work for std::vector in clang 3.4 and later. Given the following program:
#include <vector>
#include <iostream>
int main()
{
    std::vector<int> v = {0,1,2,3};
    std::cout << v[-1] << std::endl;
}
it generates the following error:
vector[] index out of bounds
Aborted
If you're using Linux or OS X you should look into the address sanitizer:
http://clang.llvm.org/docs/AddressSanitizer.html
It introduces a 2x slowdown, but does a bunch of memory checking and may catch your bug.
Another amazing tool that has saved me countless times is valgrind. If you can run with valgrind it will catch a ton of memory bugs and leaks.
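For example, with the buggy program from the answer above saved as main.cpp (file name hypothetical):
clang++ -std=c++11 -fsanitize=address -g main.cpp -o main
./main    # ASan should report the invalid read at v[-1] with a stack trace
or, building without ASan instrumentation:
clang++ -std=c++11 -g main.cpp -o main
valgrind ./main    # valgrind flags the invalid read as well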
#define _GLIBCXX_DEBUG
This enables all kinds of inline checking (see the <vector> and <debug/vector> headers).
I have a problem about iswalpha() on iOS.
I am tuning my app in Xcode 4.5, and I tried passing the Spanish character ú to iswalpha(). Xcode displays the int value of ú as 250.
When I run the app on a real device, iswalpha() returns 0; but in the simulator (I run Xcode on a MacBook Air with 10.8.2) it returns 1.
I guess the reason might be that iOS has a different implementation of wide characters than OS X does. What is the best way to resolve this?
Enhanced details:
The UTF-16 (Unicode) encoding of the Spanish character ú is 250 as an int value. I think iswalpha() should return 1, as it does on OS X, rather than the 0 it returns on iOS.
Darn, as a new user I cannot post an image here, so for the UTF-16 encoding of ú please refer to:
http://www.fileformat.info/info/unicode/char/fa/index.htm
Well, I can answer my own question now, and leave a development log in case I forget this later:
It seems to be a fault in Apple's implementation of libc on iOS. The implementation of iswalpha() is incomplete for letters in languages other than English. Specific letters (ú, á, ó, ...) in different languages are not recognized by iswalpha() because they fall outside the 0x7F ASCII boundary, and somehow they are not recognized by iOS's locale-processing functions either, even though in the appropriate locale they should be perfectly readable alphabetic letters.
Some details about it:
iswalpha() on iOS tracks down to:
__DARWIN_CTYPE_static_inline int
__istype(__darwin_ct_rune_t _c, unsigned long _f)
{
#ifdef USE_ASCII
    return !!(__maskrune(_c, _f));
#else /* USE_ASCII */
    return (isascii(_c) ? !!(_DefaultRuneLocale.__runetype[_c] & _f)
        : !!__maskrune(_c, _f));
#endif /* USE_ASCII */
}
and it is __maskrune(_c, _f) that in the end returns 0.
It is understandable that Apple missed this point, since few people will use iswalpha() in Objective-C. However, it may still be worth noting for porting projects: it is a widely used function, so it may matter to many legacy projects being ported to iOS. I hope Apple fixes it in a later release.
My workaround for this problem is a wrapper around iswalpha() that handles these Latin letters in my own code. Now the app runs flawlessly on my iPhone!
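For reference, a minimal sketch of such a wrapper (the function name and the choice of ranges are my own; extend the check for whatever alphabets your app needs):
#include <wctype.h>

/* Fall back to a manual check for Latin-1 letters when iswalpha()
   fails to classify them (as it does on iOS for ú, á, ó, ...). */
static int my_iswalpha(wint_t wc)
{
    if (iswalpha(wc))
        return 1;
    /* U+00C0..U+00FF are letters, except the multiplication (U+00D7)
       and division (U+00F7) signs */
    return wc >= 0x00C0 && wc <= 0x00FF && wc != 0x00D7 && wc != 0x00F7;
}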
For the life of me I can't figure out how to resolve the NtQueryKey symbol declared in my device driver. I looked for a device-driver forum but didn't find one.
Can someone point me to the right place? OSR isn't very responsive to dumb questions like how to link to NtQueryKey.
Here is my prototype:
NTSYSAPI NTSTATUS NTAPI NtQueryKey(HANDLE, KEY_INFORMATION_CLASS, PVOID, ULONG, ULONG *);
and it compiles fine, but the linker doesn't like it.
Thanks
NtXXXX functions should not be called from kernel mode; use the ZwXXXX functions instead. In your case, you want ZwQueryKey. It has the same signature as NtQueryKey, but on x86 it performs the extra steps required for calls made from kernel mode, and it's provided by ntoskrnl.exe rather than by ntdll.dll.
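For what it's worth, a minimal sketch of calling it (the helper name is mine, and the key handle is assumed to have been opened elsewhere, e.g. with ZwOpenKey):
#include <ntddk.h>

/* Ask ZwQueryKey how large a KEY_BASIC_INFORMATION buffer is needed. */
NTSTATUS QueryKeyBasicSize(HANDLE keyHandle, ULONG* requiredSize)
{
    NTSTATUS status = ZwQueryKey(keyHandle, KeyBasicInformation,
                                 NULL, 0, requiredSize);
    /* expected: STATUS_BUFFER_TOO_SMALL with *requiredSize filled in;
       allocate that many bytes and call ZwQueryKey again */
    return status;
}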
In kernel mode you link to the Zw.... equivalent functions; see the documentation. Nt.... functions are called from user mode (for example, the Win32 subsystem would call the Nt.... functions).