Assembly - Why are there changing values in statically analyzed code? - memory

I would like to extend a program's code, and I want to access a specific piece of data there. My issue is that the address of that data changes with every system restart. For example, the data is currently defined as .data:00B374F1 byte_B374F1; after the next system restart it will be at a different address, so my new code won't work anymore.
I understand that the address changes because of some kind of memory allocation, but isn't that only relevant to dynamic analysis? I would really appreciate it if someone could help me out; I am new to assembly.

Related

How do I save the memory state of a C program to jumpstart later

In a large, complex C program, I'd like to save to a file the contents of all memory used by static variables, global structures, and dynamically allocated variables. There are more than 10,000 such variables.
The C program has only a single thread and no file operations, and the program itself is not very complex (the calculation is).
Then, in a later run of the program, I want to initialize the memory from this saved state.
If this is even possible, can someone offer an approach to accomplish this?
You have to define a struct to keep all your data in, and then implement a function to save it to a file.
Something like this: Saving struct to file
Please note, however, that this method is the simplest, but comes with no portability at all.
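A minimal sketch of that approach, assuming all persistent state can be gathered into one struct (the member names here are made up for illustration). Note that any pointers stored inside the struct would be written out as raw addresses and be meaningless on reload, which is part of why the method is not portable:

#include <stdio.h>

/* All persistent state gathered into one struct (illustrative members). */
typedef struct {
    int    step_count;
    double temperature;
    double grid[10000];
} AppState;

/* Write the whole struct in one block; returns 0 on success. */
int save_state(const char *path, const AppState *state)
{
    FILE *f = fopen(path, "wb");
    if (f == NULL)
        return -1;
    size_t written = fwrite(state, sizeof *state, 1, f);
    fclose(f);
    return written == 1 ? 0 : -1;
}

/* Read it back; only safe with a file produced by the same build on the
 * same platform (no portability, as noted above). */
int load_state(const char *path, AppState *state)
{
    FILE *f = fopen(path, "rb");
    if (f == NULL)
        return -1;
    size_t read_items = fread(state, sizeof *state, 1, f);
    fclose(f);
    return read_items == 1 ? 0 : -1;
}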
Edit after comment: basically, what you would like to do is save whatever is happening in the program and then restore it later from that saved state. I don't think this is possible in any simple way. You MUST understand what the "state of your application" means.
Think about it: a dump of the memory would save not only the data, but also the current instruction pointer. So with that "dumb" dump, you would also have saved the exact instruction currently executing, along with many other complications you really don't want to deal with.
The closest thing to what you are thinking of is running the program in a virtual machine. If you pause the VM, the execution state is "saved", and whenever you restart the VM, the program resumes at the exact execution point where you paused it.
Even if the configuration is scattered throughout the application, you can still gather it into a global struct used to save everything.
But you still have to know your program and identify what has to be saved. There are no shortcuts there.

Understanding file mapping

I am trying to understand mmap and was given the following link to read:
http://duartes.org/gustavo/blog/post/page-cache-the-affair-between-memory-and-files
I understand the text in general and it makes sense to me. But at the end there is a paragraph that I don't really understand, or that doesn't fit my understanding.
The read-only page table entries shown above do not mean the mapping is read only, they’re merely a kernel trick to share physical memory until the last possible moment. You can see how ‘private’ is a bit of a misnomer until you remember it only applies to updates. A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from. Once copy-on-write is done, changes by others are no longer seen. This behavior is not guaranteed by the kernel, but it’s what you get in x86 and makes sense from an API perspective. By contrast, a shared mapping is simply mapped onto the page cache and that’s it. Updates are visible to other processes and end up in the disk. Finally, if the mapping above were read-only, page faults would trigger a segmentation fault instead of copy on write.
The following two lines don't make sense to me:
A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from.
It is private. So it can't see changes by others!
Finally, if the mapping above were read-only, page faults would trigger a segmentation fault instead of copy on write.
I don't know what the author means by this. Is there a flag like "MAP_READ_ONLY"? Until a write occurs, every mapping from the program's virtual pages to the page-table entries in the page cache is read-only.
Can you help me understand these two lines?
Thanks
Update
It seems I got it, with some help.
A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from.
Although the mapping is private, the virtual page really can see changes made by others, until the process itself modifies a page. From that point on, the modification is private and only visible to the virtual page of the writing program.
Finally, if the mapping above were read-only, page faults would trigger a segmentation fault instead of copy on write.
I'm told that pages themselves can also have permissions (read/write/execute).
Tell me if I'm wrong.
This fragment:
A consequence of this design is that a virtual page that maps a file privately sees changes done to the file by other programs as long as the page has only been read from.
is telling you that the kernel cheats a little bit in the name of optimization. Even though you've asked for a private mapping, the kernel will actually give you a shared one at first. Then, if you write the page, it becomes private.
Observe that this "cheating" doesn't matter (doesn't make any difference) if all processes which are accessing the file are doing it with MAP_PRIVATE, because no actual changes to the file will ever occur in that case. Different processes' mappings will simply be upgraded from "fake cheating MAP_PRIVATE" to true "MAP_PRIVATE" at different times according to whenever each process first writes to the file. This is probably a common scenario. It's only if the file is being concurrently updated by other means (MAP_SHARED with PROT_WRITE or else regular, non-mmap I/O operations) that it makes a difference.
I'm told that pages themselves can also have permissions (read/write/execute).
Sure, they can. You have to ask for the permissions you want when you initially map the file, in fact: the third argument to mmap, which will be a combination of PROT_READ, PROT_WRITE, PROT_EXEC, and PROT_NONE.
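A small sketch of both points, assuming Linux on x86 and an existing file demo.txt at least one byte long. As the article stresses, the first observation below is what this design happens to produce on Linux, not guaranteed behavior:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("demo.txt", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* Private mapping: updates through 'p' must never reach the file. */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    printf("initial byte: %c\n", p[0]);   /* fault the page in, read-only */

    /* Simulate "another program" changing the file: this updates the shared
     * page-cache page that our read-only page-table entry still points to,
     * so on Linux the change is visible through the private mapping. */
    pwrite(fd, "X", 1, 0);
    printf("after external write, before CoW: %c\n", p[0]);   /* typically 'X' */

    /* The first write through the mapping triggers copy-on-write: the
     * process now has its own private copy of the page... */
    p[0] = 'Y';

    /* ...so further external changes are no longer seen through 'p'. */
    pwrite(fd, "Z", 1, 0);
    printf("after CoW: %c\n", p[0]);                          /* still 'Y' */

    /* Had the mapping been created with PROT_READ only, the store
     * 'p[0] = ...' above would have raised SIGSEGV instead of CoW. */
    munmap(p, 4096);
    close(fd);
    return 0;
}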

WCF Windows Service not releasing resources/memory after every call

I have created a WCF application that runs as a Windows service, installed using Windows Installer. I followed the procedure described in the following article:
http://msdn.microsoft.com/en-us/library/bb332338.aspx#msdnwcfhc_topic4
Most WCF settings are left at their defaults: the net.tcp protocol, per-call instancing, and so on.
The service's memory consumption keeps increasing after every call and never decreases. Eventually it throws an OutOfMemoryException.
The application returns very large string-based data. With a memory profiler I found that memory is still allocated to string objects and grows during each call.
As I understand it, strings are managed objects, so their memory should be released once they go out of scope.
Let me know if any other configuration or coding information is needed.
There must be something keeping references to those strings in the code. Can you use your profiler to trace the references that are keeping the string objects alive?
After many unsuccessful attempts to deal with the Large Object Heap (http://msdn.microsoft.com/en-us/magazine/cc534993.aspx), which in my case meant very large strings, I created a custom class to handle it.
Instead of storing the large string in a single object, I store it as a collection of small strings inside a custom object. Since each small string stays under the 85,000-byte LOH threshold, .NET collects them properly without causing the problem described above.
Another possible solution is to store the large data in a file and access it through a text reader. This works well and keeps the application's footprint small. Unfortunately it did not work for me, because the application was not allowed to access the file location.
It would be very difficult to answer this question without some code to look at. You can always call GC.Collect(GC.MaxGeneration) to force garbage collection and see whether that reduces your memory consumption; ideally this would only be temporary code to track down what is going on in the application. If forcing garbage collection does not reduce memory consumption, then references to the strings must be being retained, via static member variables or whatever; having no conception of what the code is, any theory would be a shot in the dark.

How to log user activity with time spent and application name using c#.net 2.0?

I am creating a desktop application in which I want to track user activity on the system, e.g. that the user opened Microsoft Excel with a given file name and worked on it for a certain amount of time.
I want to create an XML file to maintain that log.
Please provide some help with that.
This feels like one of those questions where you have to figure out what is meant by the question itself. Taken at face value, it sounds like you want to monitor how long a user spends in any process running in their session; however, it may be that you only really want to know whether, and for how long, a user spends time in a specific subset of all running processes.
Since I'm not sure which of these is the correct assumption to make, I will address both as best I can.
Regardless of whether you are monitoring one or all processes, you need to know what processes are running when you start up, and you need to be notified when a new process is created. The first of these requirements can be met using the GetProcesses() method of the System.Diagnostics.Process class; the second is a tad more tricky.
One option for checking whether new processes exist is to call GetProcesses after a specified interval (polling) and determine whether the list of processes has changed. While you can do this, it may be very expensive in terms of system resources, especially if done too frequently.
Another option is to look for some mechanism that allows you to register to be notified of the creation of a new process asynchronously. I don't believe such a thing exists within the .NET Framework 2.0, but it is likely to exist as part of the Win32 API; unfortunately I can't give you a specific function name because I don't know what it is.
Finally, however you do it, I recommend being as specific as you can about the notifications you subscribe to: the fewer of them there are, the fewer resources are used generating and processing them.
Once you know which processes are running and which ones you are interested in, you will need to determine when focus changes to a new process of interest, so that you can time how long the user actually spends using the application. For this you can use the GetForegroundWindow function to get the window handle of the currently focused window.
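A sketch of that idea using the underlying Win32 calls in native C (the same functions can be P/Invoked from .NET 2.0). The one-second polling interval and console output are placeholder choices; it links against psapi.lib for GetModuleFileNameEx:

#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    HWND last = NULL;
    for (;;) {
        /* Window handle of whatever currently has focus. */
        HWND fg = GetForegroundWindow();
        if (fg != NULL && fg != last) {
            last = fg;
            DWORD pid = 0;
            GetWindowThreadProcessId(fg, &pid);   /* owning process id */
            HANDLE h = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ,
                                   FALSE, pid);
            if (h != NULL) {
                char name[MAX_PATH];
                if (GetModuleFileNameExA(h, NULL, name, MAX_PATH))
                    printf("focus -> pid %lu: %s\n", (unsigned long)pid, name);
                CloseHandle(h);
            }
        }
        /* Timestamps of these focus transitions give time-in-application. */
        Sleep(1000);
    }
}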
As far as logging to an XML file, you can either use an external library such as log4net, as suggested by pranay's answer, or you can build the log file using the XmlTextWriter or XmlDocument classes in the System.Xml namespace.

possible to have two working sets? 1) data 2) code

With regard to operating system concepts: can a process have two working sets, one that represents data and another that represents code?
A "Working Set" is a term associated with Virtual Memory Manangement in Operating systems, however it is an abstract idea.
A working set is just the concept that there is a set of virtual memory pages that the application is currently working with and that there are other pages it isn't working with. Any page that is being currently used by the application is by definition part of the 'Working Set', so its impossible to have two.
Operating systems often do distinguish between code and data in a process using various page permissions and memory protection but this is a different concept than a "Working set".
This depends on the OS.
But on common OSes like Windows there is no real difference between data and code, so no, a process can't split up its working set into data and code.
As you know, the working set is the set of pages that a process needs to have in primary store to avoid thrashing. If some of these are code, and others data, it doesn't matter - the point is that the process needs regular access to these pages.
If you want to subdivide the working set into code and data and possibly other categorizations, to try to model what pages make up the working set, that's fine, but the working set as a whole is still all the pages needed, regardless of how these pages are classified.
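For reference, the formal definition the discussion above relies on (Denning's working-set model), where Δ is the length of the sampling window:

W(t, Δ) = { p : page p was referenced at least once during the interval (t − Δ, t] }

The working set at time t is simply whatever pages were touched in the last Δ time units; the definition makes no distinction between code pages and data pages.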
EDIT: Blocking on I/O - does this affect the working set?
Remember that the working set is a model of the pages used over a given time period. When the length of time the process is blocked is short compared to the time period being modelled, then it changes little - the wait is insignificant and the working set over the time period being considered is unaffected.
But when the I/O wait is long compared to the modelled period, it changes a lot. During the period the process is blocked, its working set is empty. An OS could theoretically swap out all of the process's pages on that basis.
The working set model attempts to predict what pages the process will need based on its past behaviour. In this case, if the process is still blocked at time t+1, then the model of an empty working set is correct; but as soon as the process is unblocked, its working set will be non-empty. The prediction by the model still says no pages are needed, so the predictive power of the model breaks down. But this is to be expected: you can't really predict the future, normally. And the working set is expected to change over time.
This question is from the book "Operating System Concepts". The answer they are looking for (found elsewhere on the web) is:
Yes, in fact many processors provide two TLBs for this very reason. As an example, the code being accessed by a process may retain the same working set for a long period of time. However, the data the code accesses may change, thus reflecting a change in the working set for data accesses.
That seems reasonable, but it is completely at odds with some of the other answers...
