How to extract or find executable code in a memory dump?

When I analyze malware, I need its disassembled code, so I am trying memory forensics to find the executable code in a memory dump.
I have used Volatility, WinDbg, and other forensic tools, but I can't find the executable code anywhere.
How can I find executable code in a memory dump file?

Finding malware in a memory dump is a hard process. The malware may be packed or encrypted, in which case searching for an executable will turn up no matches; this is likely what you are running into.
Without reversing the packing/encryption, you won't find the executable code.
I would recommend watching this YouTube series about decrypting and reverse engineering programs.
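If the code in memory is not packed, or has already been unpacked by the time the dump was taken, one low-tech check is to carve the dump yourself for PE headers. Here is a minimal Python sketch of that idea (not a full carver, and the file name memdump.raw is just a placeholder): it scans for the DOS "MZ" magic and verifies the "PE\0\0" signature through the e_lfanew field.

import struct

def find_pe_headers(path):
    """Scan a raw memory dump for candidate PE images."""
    with open(path, "rb") as f:
        data = f.read()  # for multi-GB dumps, read and scan in chunks instead
    hits = []
    offset = data.find(b"MZ")
    while offset != -1:
        # e_lfanew at DOS-header offset 0x3C points to the "PE\0\0" signature
        if offset + 0x40 <= len(data):
            e_lfanew = struct.unpack_from("<I", data, offset + 0x3C)[0]
            # heuristic: e_lfanew is normally a small positive offset
            pe = data[offset + e_lfanew:offset + e_lfanew + 4]
            if 0 < e_lfanew < 0x1000 and pe == b"PE\x00\x00":
                hits.append(offset)
        offset = data.find(b"MZ", offset + 2)
    return hits

for off in find_pe_headers("memdump.raw"):
    print(f"possible PE image at offset 0x{off:x}")

Volatility's malfind plugin does a far more thorough job (it walks each process's address space and flags suspicious executable regions), but a hit from a crude scan like this at least tells you whether an unpacked PE image is present in the dump at all.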

Related

What exactly is an .EBIN file and how can I access its contents?

A friend of mine has a problem. He has hundreds of highly confidential .EBIN files for a medical study, created by a person who is no longer available.
I figured that it's probably related to Erlang ("ebin" is the name of the directory Erlang compiles into) - I downloaded Erlang and looked for several file type specifications, but I just can't find a way to "open" this binary file.
I feel really stupid right now, as a long-term programmer I should be able to access this easily, but I'm clueless. I don't even know what to enter into a search engine.
I'd guess that they just contain serialized Erlang data ("terms"). Try starting Erlang and entering the following in the Erlang shell:
erlang:binary_to_term(element(2,file:read_file("YOURFILE.EBIN"))).
See http://erlang.org/doc/man/erlang.html#term_to_binary-2 for details about the term_to_binary() function, and see http://erlang.org/doc/apps/erts/erl_ext_dist.html for details about the term format. If the bytes on disk don't match that format, it's likely that the binary data was also encrypted before being written to disk.
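A quick way to check whether the bytes even look like plain term_to_binary() output is to inspect the first byte: the external term format always starts with the version magic 131 (0x83). A minimal Python sketch of that check (the file name YOURFILE.EBIN is taken from the answer above):

with open("YOURFILE.EBIN", "rb") as f:
    first = f.read(1)

if first == b"\x83":
    print("Looks like Erlang external term format; binary_to_term/1 should decode it.")
else:
    print("No 0x83 version tag; the data is probably encrypted or uses another format.")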

Does online compiling optimize readonly variables on nodeMcu ram?

I've done several searches but wasn't able to find any documentation about the online compiler that comes with NodeMCU. I am writing some basic code with a lot of "const variables", which are like #define in C. These variables are read-only, and I use them just for documentation and to quickly change the program at development time.
Since I know that RAM is tiny on NodeMCU (ESP-12 modules), I need to know whether compiling files by calling node.compile() helps me save RAM by optimizing these constants and placing them into some ROM memory.
Thanks!
Yes, it does help. The full answer, however, is in the dedicated chapter of our FAQ at http://nodemcu.readthedocs.io/en/latest/en/lua-developer-faq/#techniques-for-reducing-ram-and-spiffs-footprint (way too long to quote here).

Analyzing code path in Objective C a la TraceGL?

TraceGL is a pretty neat project that allows JS programmers to trace code paths in JavaScript.
I'd like to build something similar for Objective-C. I know the runtime has made it rather easy to trace method calls, but how would I trace control flow? For example, in TraceGL, code paths that were not executed are made obvious with a red highlight. What would be the best way to achieve something similar in an Objective-C/Xcode workflow?
The best I've come up with so far is to write a preprocessor that injects code into temporary source files before sending them to the compiler. Does anyone have a better idea?
I guess the visualizer for issues found by Xcode's static analyzer comes pretty close to this, although it will only give you the call path for a particular issue, such as a memory leak.
Try "Product > Analyze" in Xcode, select any of the issues found on any given project and click on the blue arrow in the code editor to see for yourself.
Not exactly an answer for Objective-C and Xcode.
For C++ code there is an industrial-quality code coverage tool, BullseyeCoverage:
Function coverage gives you a quick overview, and condition/decision coverage gives you high precision.
It works with everything you can write in C++ and C, including system-level and kernel-mode code.
If you want to write this kind of tool yourself, I'd recommend taking a look at (evaluating) some existing tools that solve the same task, so that you don't miss key functionality.
There are basically two categories of such tools:
those working at the binary level, instrumenting byte code, library entry points, etc.
those working at the source level, instrumenting the source code before it goes to the compiler.
The purpose of the instrumentation is to insert calls into the code to a profiling runtime that collects runtime statistics for further processing.
Basic calls
timestamp, thread id, source code address, entering
timestamp, thread id, source code address, leaving
The source code address depends on the granularity you are interested in. It can be a function name, or it can be a source file and line number.
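To make the record format concrete, here is a minimal Python sketch of the same enter/leave logging, using the interpreter's built-in trace hook instead of source instrumentation (a real tool for C/C++/Objective-C would insert equivalent calls at compile time; the record layout is the point, not the mechanism):

import sys
import threading
import time

def tracer(frame, event, arg):
    # one record per enter/leave: timestamp, thread id, source address, direction
    if event in ("call", "return"):
        print((time.perf_counter(),
               threading.get_ident(),
               f"{frame.f_code.co_filename}:{frame.f_code.co_firstlineno}",
               "enter" if event == "call" else "leave"))
    return tracer

def work(n):
    return sum(range(n))

sys.settrace(tracer)
work(1000)
sys.settrace(None)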
The collected performance data can be quite huge, so it is usually summed up and whole call stacks are not captured. That is usually a sufficient level of detail for detecting performance bottlenecks.
Another drawback is that capturing detailed performance data, especially at code points with many hits, will slow the application down significantly.
If you want the complete history, capture the full trace, including timestamps and thread IDs, and you will be able to recreate the call stacks later, knowing that each enter has a corresponding leave.
To guarantee this pairing, the instrumentation must insert exception-handling calls to make sure the exit point is logged even if the function throws an exception (what an "exception" is and how to try-finally it depends on the language and the OS platform).
To pick up all the necessary tricks and tips, evaluate some existing tools and take a look at their instrumentation style.
BTW: in general this is quite a lot of work to do and to get right, so I'd personally think twice or more about what the outcome will be and what the costs are.
As a want-to-play-with topic I fully recommend it. I created such a tool for troubleshooting Java MIDP applications, working at the C++ source level and the Java binary level, and it was helpful back when we needed it.

Binary Serialized File - Delphi

I am trying to deserialize an old file format that was serialized in Delphi; it uses binary serialization. I know nothing about the structure of the file except some very high-level records that are in it.
What steps would you take to solve this problem? Any tools etc?
A good hex editor, and use the gray matter to identify structures.
If you get a hint what kind of file it is, you can search for more specialized tools.
Running the Unix/Linux "file" command can be good too (*). See Barry's comment below for how it works. It can be a quick check for common file types like DBF, ZIP, etc. hidden behind a different extension.
(*) There are third-party builds for Windows, but they might lag behind in versions. If you can do it on a recent *nix distro, it is advisable to do so.
The serialization process simply loops over all published properties and streams their values to the file. If you do not know the exact classes that were streamed to the file, you will have a very hard time deserializing it (if not an impossible one).
A good hex editor comes first. If the file is read without buffering (e.g., directly from a TFileStream), you can gain some information using ProcMon from Sysinternals: you can see exactly what data is read in which chunks, and thus more quickly determine where the boundaries are between the structures you have already identified.
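As a complement to the hex editor, a small script can give you a first look at the layout. A minimal Python sketch (the file name data.bin is just a placeholder): hex-dump the first bytes and extract printable runs, which in Delphi-streamed files often exposes class and property names stored as length-prefixed strings.

import re

def hexdump(data, width=16):
    for i in range(0, len(data), width):
        chunk = data[i:i + width]
        hex_part = " ".join(f"{b:02x}" for b in chunk)
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        print(f"{i:08x}  {hex_part:<{width * 3}} {text}")

with open("data.bin", "rb") as f:
    data = f.read()

hexdump(data[:256])                        # eyeball the header
for s in re.findall(rb"[ -~]{4,}", data)[:40]:
    print(s.decode("ascii"))               # likely class/property names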

Best Free Text Editor Supporting *More Than* 4GB Files? [closed]

I am looking for a text editor that will be able to load a 4+ gigabyte file into it. TextPad doesn't work. I own a copy of it and have been to its support site; it just doesn't do it. Maybe I need new hardware, but that's a different question. The editor needs to be free OR, if it's going to cost me, then no more than $30. For Windows.
glogg, a log file viewer rather than an editor, could also be considered for a different usage:
Caveat (reported by Simon Tewsi in the comments, Feb. 2013)
One caveat - has two search functions, Main Search and Quick Find.
The lower one, which I assume is Quick Find, is at least an order of magnitude slower than the upper one, which is fast.
I've had to look at monster (runaway) log files (20+ GB). I used the free version of hexedit, which can work with files of any size. It is also open source, and it is a Windows executable.
Jeff Atwood has a post on this here: http://www.codinghorror.com/blog/archives/000229.html
He eventually went with Edit Pad Pro, because "Based on my prior usage history, I felt that EditPad Pro was the best fit: it's quite fast on large text files, has best-of-breed regex support, and it doesn't pretend to be an IDE."
Instead of loading a gigantic log file in an editor, I use Unix command-line tools like grep, tail, gawk, etc. to filter the interesting parts into a much smaller file, and then I open that.
On Windows, try Cygwin.
Have you tried the ConTEXT editor? It is small and fast.
I stumbled on this post many times, as I often need to handle huge files (10+ gigabytes).
After getting tired of buggy and pretty limited freeware, and not being willing to pay for costly editors after the trial expired (not worth the money after all), I just used Vim for Windows, with great success and satisfaction.
It is simply PERFECT for this need: fully customizable, with ALL the features one can think of when dealing with text files (searching, replacing, reading, etc., you name it).
I am very surprised nobody suggested it (except a previous answer, but for macOS)...
For the record, I stumbled on it in this blog post, which wisely recommended it.
It's really tough to handle a 4 GB file as such. I used to handle larger text files, but I never loaded them into my editor. I mostly used UltraEdit at my previous company; now I use Notepad++, but I would extract just the parts I needed to edit (in most cases, the files never needed an edit).
Why do you want to load such a big file into an editor? When I handled files of this size, I used the GNU Core Utilities. The most common operations I performed on those files were head (to get the top 250k lines, etc.), tail, split, sort, shuf, and uniq. They're really powerful.
There are a lot of things you can do with the GNU Core Utilities. I would definitely recommend those instead of a new editor.
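The same filter-first idea is only a few lines of Python if the coreutils are not at hand; a minimal sketch (file names are placeholders) that copies the first 250,000 lines into something any editor can open:

from itertools import islice

# stream the big file line by line; only the first 250k lines are written out
with open("big.log", "r", errors="replace") as src, \
     open("big_head.log", "w") as dst:
    dst.writelines(islice(src, 250_000))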
Sorry to post on such an old thread, but I tried several of the tips here, and none of them worked for me.
It's slightly different from a text editor, but I found that Beyond Compare could handle an extremely large (3.6 GB) file on my Vista 32-bit machine.
This is a file that Emacs, Large Text File Viewer, HexEdit, and Notepad++ all choked on.
-Eric
My favourite after trying a few to read a 6GB mysqldump file:
PilotEdit Lite http://www.pilotedit.com/
Because:
Memory usage has (somehow?!) never gone above 25MB, so basically no impact on the rest of my system - though it took several minutes to open.
There was an accurate progress bar during that time so I knew how it was getting on.
Once open, simple searching, and browsing through the file all worked as well as a small notepad file.
It's free.
Others I tried...
The EmEditor Pro trial was very impressive; the file opened almost instantly, but it was unfortunately too expensive for my requirements.
EditPad Pro loaded the whole 6GB file into memory and slowed everything to a crawl.
For Windows, Unix, or Mac? On the Mac or *nix you can use command-line or GUI versions of Emacs or Vim.
For the Mac: TextWrangler handles big files well. I'm not versed enough in the Windows landscape to help out there.
If you just want to view a large file rather than edit it, there are a couple of freeware programs that read files a chunk at a time rather than trying to load the entire file into memory. I use these when I need to read through large (> 5 GB) files.
Large Text File Viewer by swiftgear http://www.swiftgear.com/ltfviewer/features.html
Big File Viewer by Team Walrus.
You'll have to find the link yourself for that last one, because as a newbie I can only post a maximum of one hyperlink.
When I'm faced with an enormous log file, I don't try to look at the whole thing; I use Free File Splitter.
Admittedly this is a workaround rather than a solution, and there are times when you would need the whole file. But often I only need to see a few lines from a larger file and that seems to be your problem too. If not, maybe others would find that utility useful.
A viewer that lets you see enormous text files isn't much help if you are trying to get it loaded into Excel to use the Autofilter, for example. Since we all spend the day breaking down problems into smaller parts to be able to solve them, applying the same principle to a large file didn't strike me as contentious.
HxD -- it's a hex editor, but it allows in-place edits and doesn't barf on large files.
Tweak is a hex editor which can handle edits to very large files, including inserts and deletes.
EmEditor should handle this. As their site claims:
EmEditor is now able to open files even larger than 248 GB (or 2.1 billion lines) by opening a portion of the file with the new custom bar, the Large File Controller. The Large File Controller allows you to specify the beginning point, end point, and range of the file to be opened. It also allows you to stop the opening of the file and monitor the real size of the file and the size of the temporary disk available.
Not free, though.
I found that FAR Manager could open large files (I tried a 4.2 GB XML file).
It does not load the entire file into memory, and it works fast.
Opened a 5 GB file (quickly) with:
1) Hex Editor Neo
2) 010 Editor
TextPad also works well at opening files of that size. I have done it many times when having to deal with extremely large log files in the 3-5 GB range. Also, using grep to pull out the worthwhile lines and then looking at those works great.
The question needs more details.
Do you just want to look at a file (e.g. a log file), or do you want to edit it?
Do you have more memory than the size of the file you want to load, or less?
For example, TheGun, a very small text editor written in assembly language, claims to "not have an effective file size limit and the maximum size that can be loaded into it is determined by available memory and loading speed of the file. [...] It has been speed optimised for both file load and save."
To get around the memory limit, I suppose one can use memory mapping. But then, if you need to edit the file, some clever method should be used, like storing the local changes in memory and applying them chunk by chunk when saving. This might be inefficient in some cases (a big search/replace, for example).
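For read-only access, the mapped-memory idea is easy to try; here is a minimal Python sketch (the file name and search string are placeholders) that finds every occurrence of a pattern in a multi-GB file without reading it all into RAM:

import mmap

def find_offsets(path, needle):
    offsets = []
    # note: a 64-bit Python is needed to map files larger than the 32-bit address space
    with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        pos = mm.find(needle)
        while pos != -1:
            offsets.append(pos)
            pos = mm.find(needle, pos + 1)
    return offsets

print(find_offsets("huge.log", b"ERROR")[:20])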
I have had problems with TextPad on 4 GB files too. Notepad++ works nicely.
Emacs can handle huge file sizes and you can use it on Windows or *nix.
What OS and CPU are you using? If you are using a 32-bit OS, then a process on your system physically cannot address more than 4 GB of memory. Since most text editors try to load the entire file into memory, I doubt you'll find one that will do what you want. It would have to be a very fancy text editor that can do out-of-core processing, i.e. load a chunk of the file at a time.
You may be able to load such a huge file if you use a 64-bit text editor on a computer with a 64-bit CPU and a 64-bit operating system. And you have to make sure that you have enough space in your swap partition or your swap file.
Why do you want to load a 4+ GB file into memory? Even if you find a text editor that can do that, does your machine have 4 GB of memory? And unless it has a lot more than 4 GB in physical memory, your machine will slow down a lot and go swap file crazy.
So why do you want to open a 4+ GB file? If you want to transform it, or do a search and replace, you may be better off writing a small, quick program to do it.
I also like Notepad++.
