Linux physical memory analysis using a hex editor for forensics

I am about to start a forensic investigation of a Linux physical memory dump. I have dumped an ARM Linux system whose profile is not listed in Volatility, so I was only able to find the processes and the latest command using a hex editor. Here is the question: how can I create a Volatility profile? Should I first find the offsets of the network connections, open ports, sockets, and so on, and then go for Volatility? With the hex editor I could see some information in the memory dump; can anyone help me find the rest? And is it necessary to find all the offsets and the address space of each piece of information before creating the Volatility profile?
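For what it's worth, Volatility derives structure offsets from kernel debug information, so you don't normally need to recover offsets or address spaces by hand. A Linux profile is just a zip containing a module.dwarf built against the target's kernel plus the matching System.map; for ARM you typically build the module against the device's kernel source with the matching cross-compiler. A rough sketch of the usual build, assuming you have the target kernel source/headers and dwarfdump available (the profile name here is made up):

cd volatility/tools/linux
make                                          # builds the dummy module and dumps module.dwarf
zip MyARMDevice.zip module.dwarf System.map   # System.map must match the dumped kernel
cp MyARMDevice.zip ../../volatility/plugins/overlays/linux/
cd ../.. && python vol.py --info | grep Linux # the new profile should now be listed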

For network-related investigation, carve for pcap data in the dump, then use tshark to extract the remnants from the carved pcap file.
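A minimal sketch of that workflow, assuming bulk_extractor (whose net scanner carves packet remnants out of raw memory into a pcap; file names are illustrative):

bulk_extractor -E net -o be_out memdump.raw                       # writes be_out/packets.pcap
tshark -r be_out/packets.pcap -q -z conv,tcp                      # summarize TCP conversations
tshark -r be_out/packets.pcap -Y dns -T fields -e dns.qry.name    # list queried host names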

Related

ESP8266 1 MByte (512 KB SPIFFS) missing files

I have an issue with SPIFFS and Arduino.
I'm using an ESP07 with 1 MByte of SPI flash memory, and the Arduino IDE.
I have 16 files in my file system, uploaded with the "Tools -> ESP8266 Sketch Data Upload" option. If I select 256 KB as the SPIFFS size, all works fine: all files are there and the system works fine.
But if I use 512 KB for SPIFFS, only 8 files are there after using the same "Tools -> ESP8266 Sketch Data Upload" option.
I have verified my SPI flash memory with the "CheckFlashConfig" demo included in the Arduino IDE; it is 1 MByte.
I need to use the 512 KB layout because the customer can upload a file that can be too big for the 256 KB SPIFFS layout.
Curiously, I selected 2 MByte (even though the memory is 1 MByte), assigning 1.5 MB sketch / 512 KB SPIFFS, and it worked fine (probably because the top address bit was ignored, so it effectively ran as 512/512 within the real 1 MByte).
I have the option to upload all those files manually, and it would probably work, but it is slower than just burning the memory in production.
Is it a SPIFFS bug, a problem with SPIFFS in Arduino, or maybe something I'm missing?
Thanks.
NOTE: I'm using the esp8266 community package, version 2.5.0
Since I am not allowed to comment:
Please upgrade to ESP8266 v2.6.3:
SPIFFS as the standard file system has been replaced by LittleFS (which means small code changes if you use the Dir object), but it offers improvements in reliability.
For testing, choose Generic ESP8266 with these parameters: 1MB (FS:512KB OTA:~246KB)
If the problem persists (unlikely), or you don't need OTA, check the partition schemes in the following boards.txt:
C:\Users\YOURUSERNAME\AppData\Local\Arduino15\packages\esp8266\hardware\esp8266\X.X.X\
(depending on the version, X.X.X can be 2.5.0 or 2.6.3)
- you can define your custom scheme if you like
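To locate those entries quickly: the menu labels differ between core versions (e.g. "1M (512K SPIFFS)" in 2.5.0 vs "1MB (FS:512KB OTA:~246KB)" in 2.6.3), so grep loosely and inspect the build keys next to each match:

grep -n "512" boards.txt    # shows each 512 KB filesystem variant and its partition-related build.* keys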

Ghostscript PDF/A conversion problem on Ubuntu 18.10 and Docker

I am using Ghostscript to convert PDF 1.3 to PDF/A-1b using this command:
gs -dPDFA -dBATCH -dNOPAUSE -dNOOUTERSAVE -sColorConversionStrategy=sRGB -sDEVICE=pdfwrite -sOutputFile=output.pdf PDFA_def.ps input.pdf
The PDFA_def.ps is customized to use the sRGB ICC profile; apart from that change, it is the standard def file that comes with GS 9.26.
Now comes the tricky part:
1- running this conversion locally on Ubuntu 18.10, GS 9.26, it works fine and I get a valid PDF/A
2- running the same command in a Docker container (Ubuntu 18.10, GS 9.26) creates a PDF/A as well, which is considered to be valid
However, in the first scenario I can process the file using mustang (https://github.com/ZUGFeRD/mustangproject) to create a valid electronic invoice. In the second scenario (Docker container) this fails, since the file is not considered valid PDF by mustang.
Checking both PDF files, I would have expected them to be identical, since I am running the same conversion. However, they are not: the PDF created in the Docker container is 10 bytes smaller and shows some different meta-information in the file itself.
I suspect that there must be some "hidden dependencies" that make GS act differently on my host system compared to the Docker container, but it feels entirely wrong and I am running out of means to debug further.
Does anyone know whether GS has some more dependencies that might cause the same command to produce different results?
The answer is 'maybe'. It depends on how Ghostscript was built for starters.
I assume you are using a package, and not building from source yourself. In which case there are a number of dependencies including: FreeType, LibJPEG, JBIG2dec, LibTIFF, JPEG-XR, LCMS2, LibPNG, OpenJPEG, Expat, zlib, and potentially IJS, CUPS and X-Windows, depending on what devices were built in.
Any or all of these could be system shared libraries instead of being built using the specific version shipped by Artifex. They could also be different versions on the two systems.
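One quick way to check this, assuming a dynamically linked gs binary (run on both systems and diff the output):

ldd $(which gs) | sort    # shared libraries the binary actually loads
gs --version              # confirm both really are 9.26

A differing libfreetype, liblcms2 or zlib version would show up here.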
That said, I think it's 'unlikely' that this is your problem. However, without seeing the PDF files I can't tell you why there is a difference. Differences in the metadata are to be expected, since that includes a date/time stamp.
I'd really need to see examples of the original and the two output PDF files to be able to comment further.
[Edit]
Looking at the files they have been produced compressed (unsurprisingly) which can obviously lead to differences in size if there are small differences in the input streams. So the first task was to decompress the files.
That done, I see there are, essentially, no differences between them. One of the operating systems is using a time zone 7 hours behind UTC, the other is in UTC, so where one of the systems is time stamping with (e.g.)
2019-04-26T19:49:01Z
The other is using
2019-04-26T12:51:35-07:00
So instead of Z (for UTC) you get -07:00 which is where the extra 10 bytes are coming from. Other than that, the Unique IDs are (as you would imagine) different, the Length values for the streams are different for the streams containing dates, and the startxref is different because the streams are different lengths.
Both files claim to be compatible with PDF/A-1b. In short, I can see no real differences between them. Since you are using a tool to further process the files, I'd suggest you try taking the PDF file from your working system and processing it on the non-working system, and vice versa; it seems to me that the problem may be the later processing rather than the PDF file itself. Perhaps you have different versions of that tool on the two systems.
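If you want to reproduce that comparison yourself, one way to decompress and diff the two outputs (this uses qpdf, which may or may not be what was used here; file names are illustrative):

qpdf --qdf --object-streams=disable host.pdf host.qdf.pdf
qpdf --qdf --object-streams=disable docker.pdf docker.qdf.pdf
diff host.qdf.pdf docker.qdf.pdf    # date stamps, IDs and stream Lengths should be the only hits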
For what it may be worth, Ghostscript can be induced into creating a ZUGFeRD file directly, see this bug report and this commit to the repository.

What kernel options is Google's Container Optimized OS built with?

I'm having trouble finding the kernel options that Google's Container-Optimized OS is built with. I tried looking in the usual locations like /boot/config-* and /proc/config.gz, but didn't find anything. I searched the source code and didn't find anything either, but I'm probably just searching wrong.
The specific option I'm curious about is CONFIG_CFS_BANDWIDTH and whether it is enabled or not. Thanks!
You can get it by running zcat /proc/config.gz in a Container-Optimized OS VM.
The kernel config is generated from the source here. However, note that the source kernel configs are changed during the OS image building process, so they are not 100% the same.
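So for the specific option, on a COS VM:

zcat /proc/config.gz | grep CFS_BANDWIDTH    # prints CONFIG_CFS_BANDWIDTH=y if the option is enabled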

Weka GUI - Not enough memory, won't load?

This same installation of Weka has loaded for me in the past. I am simply trying to load the Weka GUI (double click on the icon) and I get the following error. How can I fix it?
OutOfMemory
Not enough memory. Please load a smaller dataset or use a larger heap size.
- initial JVM size: 122.4 MB
- total memory used: 165.3 MB
- max. memory avail.: 227.6 MB
Note:
The Java heap size can be specified with the -Xmx option.
etc..
I am not loading Weka from the command line, so how can I stop this from occurring?
Just writing an answer here for Ubuntu users.
If you apt-get install weka, you will have a script installed at /usr/bin/weka
The first few lines look like this:
#!/bin/bash
. /usr/lib/java-wrappers/java-wrappers.sh
# default options
CLASS="weka.gui.GUIChooser"
MEMORY="256m"
GUI=""
Just modify the line that starts with MEMORY so that you have a larger upper bound:
MEMORY="2048m"
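Or make the same edit non-interactively; a one-liner against the script shown above (adjust the size to taste):

sudo sed -i 's/^MEMORY="256m"/MEMORY="2048m"/' /usr/bin/weka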
I'm not sure why you were able to use it before but not now. However, you can specify a larger heap size by changing the RunWeka.ini configuration file. On a Windows machine it should be in the Weka folder of your Program Files directory. You could try a line specifying, for example,
maxheap=200m
There might already be such an option in that file that you can simply change to a larger number.
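Alternatively, you can bypass the launcher and pass the heap size to the JVM yourself; where weka.jar lives depends on your install, so the path below is a placeholder:

java -Xmx2048m -jar /path/to/weka.jar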
Here is how to do it on Mac:
right-click on the main Weka file (that opens the Gui) and select "Show Package Contents";
open Info.plist file with any text editor;
change the -Xmx option.
voilà

Basics of Jmapping?

I've done some searching but couldn't find much really helpful info, so could someone explain the basics of Java memory maps? Like where/how to use it, its purpose, and maybe some syntax examples (input/output types)? I'm taking a Java test soon and this could be one of the topics, but jmap has not come up in any of my tutorials. Thanks in advance.
Edit: I'm referring to the tool: jmap
I would read the man page you have referenced.
jmap prints shared object memory maps or heap memory details of a given process or core file or a remote debug server.
NOTE: This utility is unsupported and may or may not be available in future versions of the JDK. In Windows Systems where dbgeng.dll is not present, 'Debugging Tools For Windows' needs to be installed to have these tools working. Also, PATH environment variable should contain the location of jvm.dll used by the target process or the location from which the Crash Dump file was produced.
http://docs.oracle.com/javase/7/docs/technotes/tools/share/jmap.html
It's not a tool to be played with lightly. You need a good profiler which can read its output, as jhat is only useful for trivial programs. (YourKit works just fine for 1+ GB heaps.)
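To make that concrete, the typical JDK 7-era invocations look like this (<pid> is a placeholder; find it with jps):

jps -l                                          # list running JVMs and their pids
jmap -heap <pid>                                # heap configuration and usage summary
jmap -histo:live <pid>                          # per-class instance counts (live objects only)
jmap -dump:live,format=b,file=heap.hprof <pid>  # write a binary heap dump
jhat heap.hprof                                 # browse the dump at http://localhost:7000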
