How to link to NtQueryKey in Kernel Mode

For the life of me I can't figure out how to resolve the declared NtQueryKey symbol in my device driver. I looked for a device-driver forum, but didn't find one.
Can someone point me to the right place? OSR isn't very responsive to dumb questions like how to link to NtQueryKey.
Here is my prototype:
NTSYSAPI NTSTATUS NTAPI NtQueryKey(HANDLE, KEY_INFORMATION_CLASS, PVOID, ULONG, ULONG *);
and it compiles fine, but the linker doesn't like it.
Thanks

NtXXXX functions should not be called from kernel mode. Use the ZwXXXX functions instead. In your case, you want ZwQueryKey. It has the same signature as NtQueryKey, but it performs the extra setup required for a caller that is already in kernel mode (it marks the previous mode as kernel so parameters aren't revalidated), and it's provided by ntoskrnl.exe rather than by ntdll.dll.

In kernel mode you link to the Zw.... equivalent functions. The Nt.... functions are called from user mode (for example, the Win32 subsystem calls the Nt.... functions).
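For illustration, a minimal sketch of the kernel-mode call (assuming an hKey obtained earlier via ZwOpenKey or ZwCreateKey; the helper name QueryKeyInfoSize is made up here):

#include <ntddk.h>

/* Ask how large a buffer a later ZwQueryKey call would need.
   A zero-length first call is expected to fail with
   STATUS_BUFFER_TOO_SMALL while still setting *pResultLength. */
NTSTATUS QueryKeyInfoSize(HANDLE hKey, PULONG pResultLength)
{
    return ZwQueryKey(hKey, KeyFullInformation, NULL, 0, pResultLength);
}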

Related

Unresolved external pow10 in C++Builder 64bit

We are migrating code to the Clang-based 64-bit compiler in C++Builder 10.2.3.
The linker is complaining about an unresolved external for pow10(), which is in math.h, but apparently we need a lib that isn't being linked.
Does anyone know which one it is?
AFAICT, it simply isn't in the link libraries. I dumped cw64.a and it does not contain that function.
There is an alternative:
double d = pow10l(2);
That will compile and link fine, and gives the correct result, 100.0. The result is nominally a long double, but long double maps to double on Win64, so that works fine.
FWIW, there is also a function _pow10(), but that is for internal use only; it appears to be a helper for pow10l() and some other functions.
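If existing code calls pow10() in many places, a thin wrapper built on pow10l() can stand in for it. A minimal sketch under the assumptions above (the name my_pow10 is only illustrative):

#include <math.h>

/* Drop-in stand-in for pow10(): pow10l() does link, and long double
   is the same as double on Win64, so no precision is lost. */
static inline double my_pow10(int n)
{
    return (double) pow10l(n);
}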

Lua: Read Unsigned DWORD not working in Bizhawk Emulator

When I run my code I get an error on this line:
personality = memory.readdwordunsigned(0x02024744)
This is the error message I am given by the console:
LuaInterface.LuaScriptException: [string "main"]:26: attempt to call field 'readdwordunsigned' (a nil value)
I have been testing and researching this for a while and I cannot get it to work, despite the same approach being used in several other projects, such as this one: https://projectpokemon.org/forums/showthread.php?16681-Gen-3-Lua-Scripts
Some other information:
1. I am running the Lua script on the BizHawk emulator.
2. If I change the line to memory.readbyte() I receive a different message, which leads me to believe that the console does not recognise memory.readdwordunsigned() as a function.
3. The script is in the same folder as the executable file for the emulator.
Thank you in advance for any help
It turns out that memory.readdwordunsigned() is no longer supported in the BizHawk emulator. After extensive research, and with help from a comment posted on my question, I have managed to find a working alternative:
memory.usememorydomain("System Bus")         -- switch to the full system bus so the absolute address is in range
personality = memory.read_u32_le(0x02024744) -- unsigned 32-bit, little-endian read
For anyone else who finds this answer useful: a DWORD is unsigned and 4 bytes (32 bits) in size, hence u32. If you wanted a signed byte, for example, you would use s8 instead. le means little-endian; be can be used instead for big-endian.
It is important to set the memory domain before attempting to read from memory, because the domain I had been using (IWRAM), like every other memory domain except the System Bus, would produce this error due to the size of the memory address.

Wind River VxWorks Simulator Self-Modifying Code

Good morning.
I have a program that uses self-modifying code.
Really, the build produces the binaries, which are then changed by ELFPatch, which rewrites some functions' prologues.
I am working with Wind River Workbench 3.3 & VxWorks 6.9 Update 3.
I created a standard simulator (PENTIUM),
and when I run my code on the simulator:
void replace_prolog(void* func_ptr) {
    char* p = (char*)func_ptr;
    for (int i = 0; i < PROLOGUE_SIZE; ++i)
        p[i] = m_prologue[i];  // m_prologue is a member array holding the replacement bytes
    ...
}
To name the pieces:
the real prologue the compiler emitted: Original Prologue;
the prologue as rewritten by ELFPatch in the binary: Changed Prologue;
the one that is written in at run time: Replacement Prologue.
I get an exception (signal 11 - Segmentation Fault).
!! I realized it is VxWorks's .text segment protection.
So I created a SimPC-based VIP to be my simulator BSP, excluded INCLUDE_PROTECT_TEXT (and all its related kernel components),
and ran the simulator:
Now there is no exception!
Facts
Looking at the Memory Browser, I see the Changed Prologue bytes (the memory didn't change)!
Printing the buffer to the console prints the Replacement Prologue byte values! (Weird)
Looking at the assembly view (mega weird): it shows the Changed Prologue hex values but the Original Prologue asm instructions (push bp; ...), even though the byte values do not match them.
My Questions
Has anyone had any experience with modifying the .text segment?
Has anyone encountered memory that would not change (without an exception/signal) on the simulator, and that is not a memory-mapped port or volatile?
Long Shot Assumption
My assumption is that it is about caching: VxWorks knows this region shouldn't change, so it doesn't write through, but I don't know how I can check that...
EDIT 2: I tried setting my pointers to be volatile => same behavior!
Please Help.
This may not be the answer, but since you are seeing the expected output, it confirms that the .text section has changed. The only explanation I can think of is that, if you are using host tools to look at the .text memory, the information may be read from the host side.
Did you type commands on the target to look at the memory location?
I forgot about this question, but I still have an answer.
There is an issue with the host tools: they do not show changes to the .text section,
while on the target the bytes actually changed.
The function didn't work because my transformation was breaking dynamic linking:
my function's code had a call to a function taking a constant string, "Whatever".
When I transformed the function's code I unintentionally changed the reference of a relocation pointer, which at load time was resolved to a bad absolute pointer.
Luckily for me, it pointed to a 0x00 buffer, and therefore printed an empty string without crashing.
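To picture the failure mode, here is a hypothetical reconstruction (not my actual code) of the kind of call that broke:

#include <stdio.h>

void patched_func(void)
{
    /* The address of the string literal is not known until load time:
       the loader writes it via a relocation entry that points into this
       code. Shifting these bytes made the loader patch the wrong spot,
       so the call received a bogus absolute pointer. */
    printf("Whatever");
}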
Suggested solutions:
1. Do not touch the relocated pointers, both when altering the executable and when altering it at run time.
2. Create a static, self-contained executable with an absolute footprint => no dynamic relocation occurs that way.
3. Alter dl() to transform the altered relocated pointers back to their expected relocations.
4. Alter dl() to infer, from the altered relocated pointers, the expected altered absolute pointer, so that the transformation creates an absolute pointer.
Note: I chose #2 because it is the simplest, and because in my system I do not need shared objects anyway.

Bounds checking of std::vector (and other containers) in clang?

In clang, is there a way to enable bounds checking for [] access to std::vector and the other STL containers, preferably when building in debug mode only?
I just spent hours hunting down a subtle bug that turned out to be caused by us accessing past the end of a std::vector. It doesn't need to do anything clever when it detects the error; it just needs to trap in the debugger so that I can find out where it happened and fix the code.
Is there a way to do this other than "create your own type that inherits from std::vector", which I'd like to avoid?
(I'm using clang version 3.1 if that makes a difference.)
libstdc++ has a mature debug mode, enabled with -D_GLIBCXX_DEBUG.
libc++ also has a debug mode, enabled with -D_LIBCPP_DEBUG, but as we can see in this mailing-list discussion, Status of the libc++ debug mode, it is incomplete:
| My understanding is that this work was never completed and it's probably broken/incomplete.
| That is correct.
| It's on my list of things to fix/implement, but it's not something that I will get to anytime soon.
It does seem to work for std::vector on 3.4 and up; given the following program:
#include <vector>
#include <iostream>

int main()
{
    std::vector<int> v = {0, 1, 2, 3};
    std::cout << v[-1] << std::endl;
}
it generates the following error:
vector[] index out of bounds
Aborted
If you're using Linux or OS X, you should look into AddressSanitizer:
http://clang.llvm.org/docs/AddressSanitizer.html
It introduces roughly a 2x slowdown, but it does a great deal of memory checking and may catch your bug.
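For instance, a minimal sketch of the kind of bug it catches (built with something like clang++ -std=c++11 -fsanitize=address -g main.cpp, per the docs above):

#include <vector>

int main()
{
    std::vector<int> v = {0, 1, 2, 3};
    // Reads past the end of the heap allocation; AddressSanitizer
    // reports a heap-buffer-overflow with a stack trace at run time.
    return v.data()[4];
}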
Another amazing tool that has saved me countless times is Valgrind. If you can run your program under Valgrind, it will catch a ton of memory bugs and leaks.
#define _GLIBCXX_DEBUG
This enables all kinds of inline checking in libstdc++ (see <vector> and <debug/vector>).
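A minimal sketch of it in action (assuming a libstdc++ build, e.g. clang++ -std=c++11 -D_GLIBCXX_DEBUG main.cpp):

#include <vector>

int main()
{
    std::vector<int> v = {0, 1, 2, 3};
    // With _GLIBCXX_DEBUG defined, operator[] is range-checked:
    // this aborts with a diagnostic instead of reading garbage.
    return v[4];
}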

iswalpha() doesn't return the same value on iOS that it does on macOS

I have a problem with iswalpha() on iOS.
I am tuning my app in Xcode 4.5, and I tried passing the Spanish character ú to iswalpha(). Xcode displays the int value of ú as 250.
When I run the app on a real device, iswalpha() returns 0; but in the simulator (I run Xcode on a MacBook Air with 10.8.2) it returns 1.
I guess the reason might be that iOS has a different wide-character implementation than macOS does. What is the best way to resolve this?
More details:
The UTF-16 (Unicode) encoding of the Spanish character ú is 250 as an int value. I think iswalpha() should return 1 on iOS, as it does on macOS, rather than returning 0.
Damn, new users can't post images here, so for the UTF-16 encoding of ú please refer to:
http://www.fileformat.info/info/unicode/char/fa/index.htm
Well, I can answer my own question now, and leave a development log in case I forget this later:
It seems to be a fault in Apple's implementation of libc on iOS. The implementation of iswalpha() is incomplete for letters in languages other than English. Specific letters (ú, á, ó, ...) in different languages are not recognized by iswalpha() because they fall outside the 0x7F ASCII boundary, and for some reason they are not recognized by iOS's locale-processing functions either, even though in the appropriate locale they should be perfectly readable alphabetic letters.
Some details about it:
iswalpha() on iOS tracks down to:
__DARWIN_CTYPE_static_inline int
__istype(__darwin_ct_rune_t _c, unsigned long _f)
{
#ifdef USE_ASCII
    return !!(__maskrune(_c, _f));
#else /* USE_ASCII */
    return (isascii(_c) ? !!(_DefaultRuneLocale.__runetype[_c] & _f)
                        : !!__maskrune(_c, _f));
#endif /* USE_ASCII */
}
and it is __maskrune(_c, _f) that ends up returning 0.
It is understandable that Apple missed this, since hardly anyone calls iswalpha() from Objective-C. Still, it is worth noting for porting work: it is a widely used function, so it may matter to many legacy projects being ported to iOS. I hope Apple fixes it in a later release.
My workaround for this problem is a wrapper function around iswalpha() that handles these Latin letters in my own code. Now the app runs flawlessly on my iPhone!
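A minimal sketch of such a wrapper (the range check is illustrative and only covers Latin-1 letters, not full Unicode; the name my_iswalpha is made up):

#include <wctype.h>

/* Treat Latin-1 supplement letters (U+00C0..U+00FF, e.g. ú = U+00FA)
   as alphabetic, excluding the two math signs × (U+00D7) and ÷ (U+00F7),
   then fall back to the system iswalpha() for everything else. */
static int my_iswalpha(wint_t wc)
{
    if (wc >= 0xC0 && wc <= 0xFF && wc != 0xD7 && wc != 0xF7)
        return 1;
    return iswalpha(wc);
}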
