Wind River VxWorks Simulator self-modifying code - memory

Good morning.
I have a program that uses self-modifying code.
More precisely, I build the binaries, which are then modified by ELFPatch, which changes some functions' prologues.
I am working with Wind River Workbench 3.3 & VxWorks 6.9 Update 3.
I created a standard simulator (PENTIUM),
and when I run my code on the simulator:
void replace_prolog(void* func_ptr) {
    char* p = (char*)func_ptr;
    for (int i = 0; i < PROLOGUE_SIZE; ++i)
        p[i] = m_prologue[i]; // << m_prologue is a member array holding the Replacement Prologue
    ...
}
Let's call the real prologue the Original Prologue;
the one ELFPatch writes into the binary the Changed Prologue;
and the one placed at run time the Replacement Prologue.
I get an exception (signal 11 - segmentation fault).
!! I realized it is VxWorks's .text segment protection.
So, I created a SimPC-based VIP to be my simulator BSP, excluded INCLUDE_PROTECT_TEXT (and all its relevant kernel components),
and ran the simulator:
Now, there is no exception!
Facts
Looking at the Memory Browser, I see the Changed Prologue bytes (memory didn't change)!
Printing the buffer to the console prints the Replacement Prologue byte values! (Weird)
Looking at the assembly view (mega weird): it shows the Changed Prologue hex values but the Original Prologue asm instructions (push bp; ...), even though the byte values do not match them.
My Questions
Has anyone had any experience with modifying the .text segment?
Has anyone encountered memory that would not change (without an exception/signal) on the simulator, and that is not a memory-mapped port/volatile?
Long Shot Assumption
I have an assumption that it is about caching: VxWorks knows this region shouldn't change, so it doesn't write it through, but I don't know how I can check it...
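If caching really is the culprit, one thing I could try (a minimal sketch, assuming the standard cacheLib routines are configured in the kernel, i.e. INCLUDE_CACHE_SUPPORT) is to flush the data cache and invalidate the instruction cache right after patching:

#include <vxWorks.h>
#include <cacheLib.h>
#include <string.h>

/* Sketch: write the Replacement Prologue, then push the new bytes out of the
 * data cache and invalidate the instruction cache for that range so the CPU
 * refetches the patched code. Assumes cacheLib (INCLUDE_CACHE_SUPPORT) is present. */
void replace_prolog_and_sync(void* func_ptr, const char* prologue, size_t len)
{
    memcpy(func_ptr, prologue, len);
    if (cacheTextUpdate(func_ptr, len) != OK)   /* combined D-cache flush + I-cache invalidate */
    {
        cacheFlush(DATA_CACHE, func_ptr, len);
        cacheInvalidate(INSTRUCTION_CACHE, func_ptr, len);
    }
}

If the prologue sticks after this but not with the plain copy loop, the write-through/cache theory would at least be partly confirmed.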
EDIT 2: tried setting my pointers to be volatile => same behavior!
Please Help.

This may not be the answer, but since you are seeing the expected output, it confirms that the .text section is changed. The only explanation I can think of is that if you are using host tools to look at the .text memory, there is a possibility that the information is being read from the host.
Did you type commands on the target to look at the memory location?

I forgot about this question, but I still have an answer.
There is an issue with the host tools: they do not show the changes to the .text section,
while on the target the bytes actually changed.
The function didn't work because my transformation was ruining dynamic linking.
My function code had a call to a function with a constant string "Whatever".
When I transformed the function code, I unintentionally changed the reference of a relocation pointer, which at load time got a bad absolute pointer.
Luckily for me, it pointed to a zeroed buffer, and therefore printed an empty string without crashing.
Suggested solutions:
1. Do not touch the relocated pointers, either when altering the executable or when altering it at run time (sketched below).
2. Create a static, self-contained executable with an absolute footprint => no dynamic relocation occurs that way.
3. Alter dl() to transform the altered relocated pointers back to the relocations it expects.
4. Alter dl() to infer the expected absolute pointer from the altered relocated pointers, so the transformation produces a correct absolute pointer.
Note: I chose #2 because it is the simplest, and because in my system I do not need shared objects anyway.
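If you cannot go fully static and need option #1 instead, here is a minimal sketch. It assumes you have already extracted the relocation offsets for the patched function from the ELF relocation table (e.g. with readelf -r); the helper and parameter names are made up for illustration:

#include <string.h>
#include <stddef.h>

/* Sketch of solution #1: write the Replacement Prologue but leave any byte
 * that falls inside a relocated 32-bit word untouched. reloc_offs holds the
 * relocation offsets relative to the start of the function, taken from the
 * ELF .rel.text/.rela.text entries (hypothetical input, not computed here). */
static int is_relocated(size_t off, const size_t* reloc_offs, size_t n_relocs)
{
    for (size_t r = 0; r < n_relocs; ++r)
        if (off >= reloc_offs[r] && off < reloc_offs[r] + 4)
            return 1;
    return 0;
}

void replace_prolog_safe(char* func, const char* prologue, size_t len,
                         const size_t* reloc_offs, size_t n_relocs)
{
    for (size_t i = 0; i < len; ++i)
        if (!is_relocated(i, reloc_offs, n_relocs))
            func[i] = prologue[i];
}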

Related

Slice referring to out-of-scope data in the Zig language

The get function below looks to me like it returns a slice referring to data in an array that will be out of scope once the function returns, and is therefore in error. Assuming this is true, is there any way to detect this at compile time, or even at run time in a debug mode?
I couldn't find any compiler flags that detected this error at compile time or run time, and wondered if I'd missed anything that could help, or whether this is just not something Zig can detect at this time, which is fine, I'll just have to be more careful :)
This is a cut-down example, demonstrating the problem, of a real issue I had which took some time to diagnose.
const std = @import("std");

fn get() []u8 {
    var data: [100]u8 = undefined;
    return data[0..99];
}

pub fn main() !void {
    const data = get();
    std.debug.print("Name: [{}]\n", .{data});
}
I believe that's behaviour that's not currently frowned upon by the compiler (0.6.0 at the time of writing), based on my understanding of the Lifetime and Ownership part of the docs:
It is the Zig programmer's responsibility to ensure that a pointer is
not accessed when the memory pointed to is no longer available. Note
that a slice is a form of pointer, in that it references other memory.
Although it might be addressed by this issue, which describes similar behaviour: https://github.com/ziglang/zig/issues/5725

Cannot resolve enum values by name in Xcode debugger

With an enum typedef'd in a global header file used throughout my project, I am unable to refer to the individual enum values by name while using lldb in Xcode.
For example, if I am stopped at a breakpoint anywhere the enum type is available, and I try to evaluate something at the lldb prompt in Xcode (e.g. (lldb) p (int)EnumConstant), lldb complains:
error: use of undeclared identifier 'EnumConstant'
Furthermore, if I try to set a conditional breakpoint using an enum constant in the condition (e.g. right-click breakpoint in Xcode > Edit Breakpoint... > Condition: EnumConstant == someLocalVar), then Xcode complains every time it tries to evaluate that condition at that breakpoint:
Stopped due to an error evaluating condition of breakpoint 1.1: "EnumConstant == someLocalVar"
Couldn't parse conditional expression:
error: use of undeclared identifier 'EnumConstant'
Xcode's code completion popover even resolves a suggestion for the enum constant when I begin typing the name in the "Edit Breakpoint..." window, so Xcode itself doesn't have a problem resolving it.
Is there an option I can set in lldb or Xcode so that lldb maintains the enum identifiers after compilation? I'm assuming the enum constants get translated to their ordinal values during compilation, causing the executable to discard the identifiers, but that's just my naive speculation.
When I use the equivalent code in a simple GNU C program in Linux or Cygwin (minus the class definitions obviously), but using gcc/gdb instead of Xcode/lldb, I don't have these problems. It is able to resolve the enum values no problem.
I've created a tiny Xcode iPhone project to demonstrate what I mean. Using any of the enum_t constants below within the ViewController.m context (the for-loop is a good place to demo) will produce the same results.
ViewController.h:
#import <UIKit/UIKit.h>

@interface ViewController : UIViewController

typedef enum
{
    eZero, eOne, eTwo, eCOUNT
}
enum_t;

extern NSString const * const ENUM_STR[];

@end
ViewController.m:
#import "ViewController.h"
#implementation ViewController
NSString const * const ENUM_STR[eCOUNT] = { #"eZero", #"eOne", #"eTwo" };
- (void)viewDidLoad
{
[super viewDidLoad];
for (enum_t value = eZero; value < eCOUNT; ++value)
{
NSLog(#"%-8# = %d", ENUM_STR[value], value);
}
}
#end
This is a bug (fairly longstanding) in how the name->Debug Information lookup-accelerator tables for enums are built. While enum types are listed, enum values are not. That was surely done to save output debug info size - debug information gets quite big pretty quickly, and so there's a constant tension between the cost of adding more info and the utility of that more info. So far this one hasn't risen to the level of inclusion.
Anyway, doing a search through "all debug information for anything with a name that matches 'eZero'" is prohibitively slow even for decent sized projects, and gets really bad for large ones. So lldb always uses these name->Debug Info tables for its first level access.
Because the accelerator tables do contain the enum type by name (and, more importantly for you, typedefs by name as well), the workaround is to do:
(lldb) expr enum_t::eZero
(int) $0 = 0
Of course, if you have truly anonymous enums, then you are pretty much out of luck till this info gets added to the accelerator tables.
BTW, the Xcode symbol completion in the Debugger Console window is done using the Xcode SourceKit indexer, not lldb. So the completions offered from Xcode are not a reflection of lldb's knowledge of the program.
BTW, gdb doesn't use compiler-made accelerator tables (these were an Apple extension up until the new DWARF 5 standard) but manually builds an index by scanning the debug info. That allows it to index whatever seems best to the debugger. OTOH, it makes debugger startup quite a bit slower for big projects.

Metal functions failing to compile with Xcode 8

Since moving to Xcode 8 and iOS 10, my Metal-based app fails to run at all. On launch I get the error: "Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED"
This appears two to three times in the console before crashing, because an MTLComputePipelineState is not successfully created and an error is thrown when calling the MTLDevice function makeComputePipelineState(function:). The only change I have made to the project is updating to Swift 3.0, but the console seems to imply a compiler error, which, given the crash, I'm assuming is down to some Metal code not compiling properly.
Any help would be appreciated, this is ageing me prematurely.
UPDATE:
I've located the line causing the trouble in the .metal file:
int gi1 = permMod12[ii+i1+perm[jj+j1+perm[kk+k1]]];
permMod12 is a static constant array declared as:
static constant int permMod12 [512] = {7,4,5,7...}
perm is similarly static and constant:
static constant int perm [512] = {151,160...}
The variables ii, i1, jj, j1, kk and k1 are all integers calculated in the same kernel.
The kernel is quite large so I'll post a link to the GitHub location. It's the functions called simplex3D and simplex4D that are causing the issue. These are very similar, so focus on just one of them; they are carbon copies except that 4D has another set of variables in play (ll, l1, l, etc.).
The issue certainly seems to be with indexing these arrays with calculated variables, as when I change the variables to simple literals there is no error.
The kernel needs to be executed in order to get the error to occur.
Any help with this new info would be great.
I also encountered the error "Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED". In my case it stemmed from attempted use of 'threadgroup bool' variables; refactoring the code to use 'threadgroup short' variables in place of the booleans resolved the error. (I could not find in the Metal version 2 specification whether bool is or is not a valid threadgroup type.)
I've encountered this situation too, and it seems there is no single solution to this problem. In my case, the problem occurred when a texture that uses a normalized-coordinate sampler also uses the read() function. When I switched read() to sample(), this weird error went away. I hope your problem has been solved already.

Lua: Read Unsigned DWORD not working in Bizhawk Emulator

When I run my code I get an error on this line:
personality = memory.readdwordunsigned(0x02024744)
This is the error message I am given by the console:
LuaInterface.LuaScriptException: [string "main"]:26: attempt to call field 'readdwordunsigned' (a nil value)
I have been doing some testing and researching around this for a while and I cannot get it to work despite this concept being used on several other projects such as this: https://projectpokemon.org/forums/showthread.php?16681-Gen-3-Lua-Scripts
Some other information:
1. I am running the Lua script on the BizHawk emulator.
2. If I change the line to memory.readbyte() I receive a different message, which leads me to believe that the console does not recognise memory.readdwordunsigned() as a function.
3. The script is in the same folder as the executable file for the emulator.
Thank you in advance for any help
It turns out that memory.readdwordunsigned() is no longer supported in the BizHawk emulator. After extensive research, and with help from a comment posted on my question, I managed to find a working alternative:
memory.usememorydomain("System Bus")
personality=memory.read_u32_le(0x02024744)
For anyone else who finds this answer useful, note that a dword is unsigned and 4 bytes (32 bits) in size, hence the use of u32. If you wanted to read a signed byte, for example, you would use s8 instead. le means little endian; be can be used instead for big endian.
It is important to state the memory domain before attempting to read from memory, because the memory domain I was using (IWRAM), as well as every other memory domain except the System Bus, would produce this error due to the size of the memory address.

How to link to NTQueryKey in Kernel Mode

For the life of me I can't figure out how to resolve the declared NTQueryKey value in my device driver. I looked for a device driver forum, but didn't find one.
Can someone point me to the right place? OSR isn't very responsive with dumb questions like how to link to NTQueryKey.
Here is my prototype:
NTSYSAPI NTSTATUS NTAPI NtQueryKey(HANDLE, KEY_INFORMATION_CLASS, PVOID, ULONG, ULONG *);
and it compiles fine, but the linker doesn't like it.
Thanks
NtXXXX functions should not be called from kernel mode; use the ZwXXXX functions instead. In your case, you want ZwQueryKey. It has the same signature as NtQueryKey, but it performs the extra work required when it is called from kernel mode, and it's provided by ntoskrnl.exe rather than by ntdll.dll.
In kernel mode you link to the Zw... equivalent functions. See here. Nt... functions are called from user mode (for example, the Win32 subsystem would call the Nt... functions).
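For reference, here is a minimal kernel-mode sketch of calling the Zw variant. The KeyNameInformation class and the variable-length buffer handling shown are just one common usage (an assumption, not the asker's actual code); adjust the information class to whatever you actually need:

#include <ntddk.h>

/* Sketch: query the full name of an already-open registry key with ZwQueryKey.
 * Assumes KeyHandle was opened with ZwOpenKey/ZwCreateKey and that we run at
 * PASSIVE_LEVEL. */
NTSTATUS QueryKeyName(HANDLE KeyHandle)
{
    ULONG len = 0;
    NTSTATUS status = ZwQueryKey(KeyHandle, KeyNameInformation, NULL, 0, &len);
    if (status != STATUS_BUFFER_TOO_SMALL && status != STATUS_BUFFER_OVERFLOW)
        return status;

    PKEY_NAME_INFORMATION info =
        (PKEY_NAME_INFORMATION)ExAllocatePoolWithTag(PagedPool, len, 'kqZw');
    if (info == NULL)
        return STATUS_INSUFFICIENT_RESOURCES;

    status = ZwQueryKey(KeyHandle, KeyNameInformation, info, len, &len);
    if (NT_SUCCESS(status))
        DbgPrint("Key name length: %lu\n", info->NameLength);

    ExFreePoolWithTag(info, 'kqZw');
    return status;
}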
