Finding what hard drive sectors occupy a file - hard-drive

I'm looking for a nice easy way to find what sectors occupy a given file. My language preference is C#.
From my A-Level Computing class I was taught that a hard drive has a lookup table in the first few KB of the disk. In this table there is a linked list for each file detailing what sectors that file occupies. So I'm hoping there's a convenient way to look in this table for a certain file and see what sectors it occupies.
I have tried Googling, but I'm finding nothing useful. Maybe I'm not searching for the right thing, but I can't find anything at all.
Any help is appreciated, thanks.

About Drives
The physical geometry of modern hard drives is no longer directly accessible by the operating system. Early hard drives were simple enough that it was possible to address them according to their physical structure: cylinder-head-sector. Modern drives are much more complex and use systems like zone bit recording, in which not all tracks have the same number of sectors. It's no longer practical to address them according to their physical geometry.
from the fdisk man page:
If possible, fdisk will obtain the disk geometry automatically. This is not necessarily the physical disk geometry (indeed, modern disks do not really have anything
like a physical geometry, certainly not something that can be described in simplistic Cylinders/Heads/Sectors form)
To get around this problem, modern drives are addressed using Logical Block Addressing (LBA), which is what the operating system knows about. LBA is an addressing scheme in which the entire disk is represented as a linear sequence of blocks, each block being a uniform number of bytes (usually 512 or larger).
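For illustration, the old CHS scheme maps onto a linear block address with one simple formula. Here is a minimal C sketch (HPC, heads per cylinder, and SPT, sectors per track, are parameters of the old-style geometry and are purely illustrative here):

#include <stdint.h>

/* Classic CHS -> LBA conversion, shown only to illustrate why a single
 * linear block number is simpler to work with than the old geometry.
 * Note that CHS sector numbers start at 1, not 0. */
static uint64_t chs_to_lba(uint64_t c, uint64_t h, uint64_t s,
                           uint64_t hpc, uint64_t spt)
{
    return (c * hpc + h) * spt + (s - 1);
}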
About Files
In order to understand where a "file" is located on a disk (at the LBA level) you will need to understand what a file is. This is going to be dependent on what file system you are using. In Unix style file systems there is a structure called an inode which describes a file. The inode stores all the attributes a file has and points to the LBA location of the actual data.
Ubuntu Example
Here's an example of finding the LBA location of file data.
First, get your file's inode number:
$ ls -i
659908 test.txt
Run the file system debugger. "yourPartition" will be something like sda1; it is the partition your file system is located on.
$ sudo debugfs /dev/yourPartition
debugfs: stat <659908>
Inode: 659908 Type: regular Mode: 0644 Flags: 0x80000
Generation: 3039230668 Version: 0x00000000:00000001
...
...
Size of extra inode fields: 28
EXTENTS:
(0): 266301
The number under "EXTENTS", 266301, is the logical block in the file system where your file's data is located. If your file is large, there will be multiple blocks listed. There's probably an easier way to get that number, but I couldn't find one.
To validate that we have the right block, use dd to read that block off the disk. To find out your file system's block size, use dumpe2fs:
dumpe2fs -h /dev/yourPartition | grep "Block size"
Then put your block size in the ibs= parameter, and the extent logical block in the skip= parameter, and run dd like this:
sudo dd if=/dev/yourPartition of=success.txt ibs=4096 count=1 skip=266301
success.txt should now contain the original file's contents.

sudo hdparm --fibmap file
This works for ext, vfat and NTFS, and maybe more.
FIBMAP is also available from C as a Linux ioctl; a sketch is shown below.
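Since FIBMAP is exposed as an ioctl, a small C program can do roughly what the hdparm option does. A minimal sketch (typically needs root; the numbers it prints are file-system blocks relative to the start of the partition, not absolute disk sectors):

/* fibmap.c: map each logical block of a file to a file-system block number.
 * Uses the FIBMAP and FIGETBSZ ioctls from <linux/fs.h>. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <linux/fs.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    int blksz = 0;
    if (ioctl(fd, FIGETBSZ, &blksz) < 0) { perror("FIGETBSZ"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    long nblocks = (st.st_size + blksz - 1) / blksz;
    printf("block size: %d, logical blocks: %ld\n", blksz, nblocks);

    for (long i = 0; i < nblocks; i++) {
        int blk = (int)i;               /* in: logical block, out: physical block */
        if (ioctl(fd, FIBMAP, &blk) < 0) { perror("FIBMAP"); return 1; }
        printf("logical %ld -> file-system block %d\n", i, blk);
    }
    close(fd);
    return 0;
}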

Related

Trying to understand the load memory address (LMA) and the binary file offset in an ARM binary image

I'm working with an ARM Cortex-M4 (STM32F4xxxx) and I'm trying to understand how exactly the binaries (*.elf and *.bin) are built and flashed into memory, especially with regard to the memory locations. Specifically, what I don't understand is how the LMA gets 'translated' from the actual binary file offset. Let me explain with an example:
I have an *.elf file whose (relevant) sections are the following (obtained from objdump -h):
my_file.elf: file format elf32-littlearm
Sections:
Idx Name Size VMA LMA File off Algn
0 .text 000001c4 08010000 08010000 00020000 2**0
CONTENTS, ALLOC, LOAD, READONLY, DATA
1 .bootloader 00004000 08000000 08000000 00010000 2**0
CONTENTS, ALLOC, LOAD, DATA
According to that file, the VMA and LMA are 0x8000000 and 0x8010000, which is perfectly fine since they are defined that way in the linker script file. In addition, according to that report, the offsets of those sections are 0x10000 and 0x20000 respectively. Next, I execute the following command to dump the memory corresponding to .bootloader:
xxd -s 0x10000 -l 16 my_file.elf
00010000: b007 c0de b007 c0de b007 c0de b007 c0de ................
Now, create the binary file to be flashed into memory:
arm-none-eabi-objcopy -O binary --gap-fill 0xFF -S my_file.elf my_file.bin
According to the information provided above, and as far as I understand, the generated binary file should have the .bootloader section located at 0x8000000. I understand that this is not how it actually works, inasmuch as the file would get extremely big, so the bootloader is placed at the beginning of the file, i.e. at offset 0x0 (check that both memory chunks are identical, even though they are at different addresses):
xxd -s 0x00000 -l 16 my_file.bin
00000000: b007 c0de b007 c0de b007 c0de b007 c0de ................
As far as I understand, when the mentioned binary file is flashed into memory, the bootloader will be at address 0x0, which is perfectly fine taking into account that the MCU in question jumps to the address stored at 0x4 (after getting the SP from 0x0) when it starts working, as I have checked here (page 26): https://www.st.com/content/ccc/resource/technical/document/application_note/76/f9/c8/10/8a/33/4b/f0/DM00115714.pdf/files/DM00115714.pdf/jcr:content/translations/en.DM00115714.pdf
Finally, my questions are:
Will the bootloader actually be placed at 0x0? If so, what's the purpose of defining the memory sectors in the linker file?
Is this because 0x0 belongs to flash memory, and when the MCU starts, all the flash is copied into RAM at address 0x8000000? If so, will the bootloader be executed from flash memory and all the rest of the code from RAM?
Taking into account the above questions, if I have not understood anything, what's the relation/difference between the LMA and the File offset?
No, the bootloader will be at 0x08000000, as defined in the ELF file.
The image will be burned into flash at that address and executed directly from there (not copied anywhere else).
There is a somewhat undocumented behaviour: the uninitialized area before the actual data is skipped when producing the binary image. As a comment in the BFD library source states (https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=blob;f=bfd/binary.c;h=37f5f9f7363e7349612cdfc8bc579369bbabbc0c;hb=HEAD#l238):
/* The lowest section LMA sets the virtual address of the start
of the file. We use this to set the file position of all the
sections. */
The lowest section LMA (that of .bootloader) is 0x08000000 in your .elf, so the binary file will start at that address.
You should take this address into account and add it to a file offset when determining the corresponding address in the image.
Sections:
Idx Name Size VMA LMA File off Algn
0 .text 000001c4 08010000 08010000 00020000 2**0
/* ^^^^^^^^ */
/* this section will be at offset 10000 in image */
CONTENTS, ALLOC, LOAD, READONLY, DATA
1 .bootloader 00004000 08000000 08000000 00010000 2**0
/* ^^^^^^^^ */
/* this is the lowest LMA in your case it will be used */
/* as a start of an image, and this section will be placed */
/* directly at start of the image */
CONTENTS, ALLOC, LOAD, DATA
Memory layout:            Bin. image layout:

00000000   \
  ...       } skipped (not in image)
08000000   .bootloader    0
  ...      ________________
08004000   <gap>           4000
  ...      ________________
08010000   .text           10000
  ...      ________________
080101C4                   101C4
That address is defined in the ldscript, so the binary image should start at a fixed location. However, you should be aware of this behaviour when dealing with ldscripts and binary images.
To summarize the building and flashing process:
When linking, the start address is defined in the ldscript, and the first section in the ELF is located there.
When converting to binary, the start address is determined from the lowest LMA, and the binary image starts from that address.
When flashing the image, the same address is given to the flasher as a parameter, so the image is placed at the right place (as defined in the ldscript).
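To make the offset arithmetic concrete, here is a small C sketch of the relationship described above, using the addresses from the question's objdump output:

#include <stdint.h>
#include <stdio.h>

/* The objcopy -O binary image starts at the lowest section LMA, so:
 *   offset_in_bin = section_LMA - lowest_LMA
 *   flash_address = lowest_LMA + offset_in_bin
 * The values below are the ones from the question's objdump output. */
int main(void)
{
    const uint32_t lowest_lma = 0x08000000u;   /* .bootloader LMA */
    const uint32_t text_lma   = 0x08010000u;   /* .text LMA       */

    printf(".bootloader offset in image: 0x%x\n", (unsigned)(lowest_lma - lowest_lma)); /* 0x0     */
    printf(".text offset in image:       0x%x\n", (unsigned)(text_lma - lowest_lma));   /* 0x10000 */
    return 0;
}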
Update: STM32F4xxx booting process.
The address region starting at address 0 is special on these MCUs. It can be configured to map one of several other regions (flash, SRAM, or system ROM), selected by the BOOT0/BOOT1 pins.
From the CPU's side it looks as if a second copy of the flash (or of SRAM or system ROM) appears at address 0.
When the CPU starts, it first reads the initial SP from address 0 and the initial PC from address 4. In reality, these reads are served by the flash memory.
If the code is linked to run from the actual flash location, then the initial PC will point there, and execution starts at the actual flash address.
----- Mapped area (mimics contents of flash) -----
      0: (20001000)              ;
      4: (0800ABCD) ----.        ; CPU reads PC here
    ....                |        ; (it points to flash)
----- FLASH -----       |
8000000: 20001000       |        ; initial stack pointer
8000004: 0800ABCD --.   |        ; address of _start in flash
    ....            |   |
800ABCD: <_start:> movw ... <-'<-'   ; code execution starts here
(Note: this does not apply to hex images (like Intel HEX or S-record), as such formats define the loading address explicitly, and it is used as-is there.)
The documentation is pretty clear on where the address space is for the application code on the STM32s, which is 0x08000000 (a competing vendor uses something like 0x01000000, and so on), and that, when booting in a certain mode, 0x08000000 is mapped to address 0x00000000, as can easily be seen with a debugger (in both spaces).
The address space at 0x00000000 mapped to 0x08000000 is smaller than the potential address space at 0x08000000, depending on the chip. So it is wise to build for and use 0x08000000 rather than 0x00000000, but for small programs you can choose either.
Because the Cortex-M is a vector-table machine, when the logic reads address 0x00000004 (which is mapped to 0x08000004 in a normal boot mode) it sees 0x080xxxxx, and execution then gets out of the 0x00000000 memory space, avoiding any limitations there.
When you use the boot0/boot1 strap pins you can instead cause 0x00000000 to map elsewhere, where the burned-in bootloader lives. That bootloader of course can easily read 0x08000000 and simulate a reset by branching, or it can change the logic and actually reset (if you ask it to, although I don't know if that bootloader actually supports running a program; who knows, and if we did work there we couldn't necessarily say). It is quite possible it always boots into the bootloader and then changes the mapping depending on the straps.
Similar to an MMU, but much simpler: decoding addresses and aliasing them is pretty easy. If boot0 == 0 and address[31:16] == 0x0000, then address[31:16] = 0x0800 and the memory system decodes it at the different address. As easy as it is to write in C, it is that easy in the HDL, if not easier.
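As a toy illustration of that decode, here is the same idea expressed in C (this is not ST's actual logic, just a sketch of the aliasing described above):

#include <stdint.h>

/* Toy model of the boot-mode aliasing: when booting from main flash
 * (boot0 == 0), accesses to the low 0x0000xxxx region are redirected to
 * the flash alias at 0x0800xxxx.
 * Example: decode_address(0x00000004, 0) == 0x08000004. */
static uint32_t decode_address(uint32_t addr, int boot0)
{
    if (boot0 == 0 && (addr >> 16) == 0x0000u)
        addr = 0x08000000u | (addr & 0xFFFFu);
    return addr;
}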
This is not uncommon in microcontrollers as well as elsewhere. Microcontrollers generally boot from a flash/ROM, but on some architectures that same boot space is also the vector or exception table that an RTOS might want to manipulate, so sometimes you see that RAM can be swapped into that space: after a control register is changed the CPU "sees" some RAM where on boot it "saw" the vector table in flash. Alternatively, you have the code in flash branch to somewhere in RAM for the non-reset vectors, and then the RTOS or any other application that cares to do this can make runtime changes to what code actually gets run for those exceptions or interrupts.
ARM imposes address space rules for where code can execute and data can live, where you might want to start your peripheral address space, and what address space is reserved by ARM for resources within the core. So you will sometimes see RAM have an alias at a lower address, implying that if you want to run a program in RAM you want to use the lower address for execution, but you can use either address to copy the code there.
It is up to the chip designers how simple or complicated to make this. For ST it's pretty simple: they have one or more boot pins on the package that at least let you choose between your application and the on-chip bootloader. So far, in all the STM32s I have seen, the application flash space is considered to live at 0x08000000 and is mapped/aliased to 0x00000000 for one of those boot modes. When there are two boot pins exposed, up to four possible boot conditions can exist, of which one is the application with 0x00000000 aliased to 0x08000000.
As to how to get the bits into the flash, that varies widely by tool. Toolchains like GNU will certainly build a .bin file where the first byte of the file is the first byte from the ELF that we desire to have at 0x08000000 (if built that way; if you built for 0x02000000 it will still be the first byte, and that code probably won't work). There are tools, and you can certainly write your own, that, knowing that, can load a .bin file at the desired place of 0x08000000; or you can have your tool write to address 0x00000000 in the right mode for a program that is not too big and have it still land in the right place to execute on reset. Likewise there are, or you can write, tools that can parse .elf files, Intel HEX, Motorola S-record and others, and based on the information in those binaries have the data loaded into the address space you desire, assuming everything is bug free.
You might be trying to overcomplicate it. There isn't any magic to it: the tools need to do the sane thing, and the sane thing is to take the binary from the compiler and put it in the chip where we want it. We are responsible for the linker script and the bootstrap code/vector table of course, but if we do that right, the tools, if they are designed right, will put the bits in the right place in the chip, and if the chip is designed right, as documented, then it will boot and run.
Will the bootloader actually be placed at 0x0? If so, what's the
purpose of defining the memory sectors in the linker file?
Ideally you want your application, or bootloader as you are calling it, to be at address 0x08000000 in the processor's address space. In certain boot modes (boot0/boot1) that address is also aliased to 0x00000000, so you can see that vector table at both places at the same time. If you are not in the right boot mode then only 0x08000000 will show your code.
Is this because 0x0 belongs to flash memory, and when the MCU starts,
all the flash is copied into RAM at address 0x8000000? If so, will the
bootloader be executed from flash memory and all the rest of the code
from RAM?
The logic in the chip is designed to take the address the processor puts on its address bus and have more than one address land on the application flash. The application flash is not really "at" 0x08000000: if it's a 64-Kbyte flash, for example, it only has addresses from 0x0000 to 0xFFFF, and when you access 0x08001234 the bus logic actually sends 0x1234 to the flash controller, and/or the flash controller chops the top off if it knows it is supposed to handle that request. 0x00000000 and 0x08000000 are the processor's view of the address space; the reality is that the upper bits are decoded and route the request to whomever it belongs to, and the final handler ultimately looks at the lower bits to determine what is being addressed.
Like when you deliver a letter: it has a first and last name, a street address, city, state, and zip. Once it gets to the right post office in the right state, the street address is all that matters to the postal person. Once it gets to the right house, often the first name is all that matters; the rest can be ignored. No difference here. Portions of the address (can often) become don't-cares as the responsible logic that inspects the address aims the request at the correct party.
Taking into account the above questions, if I have not understood
anything, what's the relation/difference between the LMA and the File offset?
The ELF file format is generic, way overkill for microcontroller work, but being well supported and easy to use, why not. The load memory address is where we, the programmers, have desired that code to live with respect to the processor's view of the world. From a readelf perspective, the offset in the file is the offset of that information within the ELF file itself; it is just wherever the tool put it and has no other interesting relationship, or at least doesn't need to. objcopy will rip that data out of the file and, for -O binary, put it in a sort of memory-image file, with the lowest address being copied out placed at offset 0 in that file, and the size determined by the total address space spanned by all of the loadable blocks (unless you use more command-line parameters).
And, as you sort of implied, if you think about it and have a linker script bug: if you were to have even a single instruction at 0x08000000 and a single byte of .data at 0x20000000 but didn't do the AT > thing, then your file, despite only having three relevant bytes, will be 0x20000001 - 0x08000000 bytes long (roughly 384 MB) after a -O binary. So it is a good idea not to put objcopy in your makefile until you have debugged your linker script. Imagine a target where flash is at 0x00000000 and memory is at 0xE0000000: pretty big .bin files until you get the linker script sorted out.

Cassandra Data storage: data directory space not equal to the space occupied

This is a beginner's question on Cassandra architecture.
I have a 3-node Cassandra cluster. The data directory is at $CASSANDRA_HOME/data/data. I've loaded a huge data set. I did a nodetool flush and then nodetool tablestats on the table I loaded the data into. This says the total space occupied is around 50 GiB. I was curious and checked the size of my data directory with du $CASSANDRA_HOME/data/data on each of the nodes, which shows around 1-2 GB on each. How could the data directory be smaller than the space occupied by a single table? Am I missing something? My table is created with replication factor 1.
du gives out the true storage capacity used by the paths given to it. This is not always directly connected to the size of the data stored in these paths.
Two main factors mix up the output of du compared to any other storage-usage information you might get (e.g. from Cassandra).
du might give out a smaller number than expected because of two reasons: ⓐ It combines hard links. This means that if the paths given to it contain hard linked files (I won't explain hard links here, but this term is a fixed one for Unixish operating systems so it can be looked up easily), these are counted only once while the files exist multiple times. ⓑ It is aware of sparse files; these are files which contain large (sometimes huge) areas of empty space (zero-bytes). In many Unixish file systems these can be stored efficiently, depending on how they have been created.
du might give out a larger number than expected because file systems have some overhead. To store a file of n bytes, n + h bytes need to be stored because of this. h depends on the file system and its configuration. The most important factor is that file systems typically store files in a block structure. If a file isn't exactly a multiple of the file system's block size, the last needed block is still allocated completely by this file, so some of its size is wasted. du will show the whole block as allocated because, in fact, it is.
So in your case Cassandra might talk about 50 GiB of space occupied, but a lot of it might be empty (never written-to) space. This might be stored in sparse files on the file system, which in fact only use about 2 GiB of storage (which is what du shows).
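If you want to see this effect for a single file, compare its logical size with its allocated size; a minimal C sketch using stat(2) (st_blocks is counted in 512-byte units regardless of the file-system block size):

#include <stdio.h>
#include <sys/stat.h>

/* Compare a file's logical size with the space actually allocated for it.
 * A sparse file shows allocated < logical; block-size overhead shows the
 * opposite. */
int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

    struct stat st;
    if (stat(argv[1], &st) != 0) { perror("stat"); return 1; }

    long long logical   = (long long)st.st_size;
    long long allocated = (long long)st.st_blocks * 512LL;

    printf("logical size:   %lld bytes\n", logical);
    printf("allocated size: %lld bytes\n", allocated);
    return 0;
}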

eheap allocated in erlang

I use recon_alloc:memory(allocated_types) and get info like below.
34> recon_alloc:memory(allocated_types).
[{binary_alloc,1546650440},
{driver_alloc,21504840},
{eheap_alloc,28704768840},
{ets_alloc,526938952},
{fix_alloc,145359688},
{ll_alloc,403701800},
{sl_alloc,688968},
{std_alloc,67633992},
{temp_alloc,21504840}]
The eheap_alloc is using 28G. But summing up the heap_size of all processes:
>lists:sum([begin {_, X}=process_info(P, heap_size), X end || P <- processes()]).
683197586
Only 683M! Any idea where the 28G is?
You are not comparing the right values. From erlang:process_info
{heap_size, Size}
Size is the size in words of youngest heap generation of the
process. This generation currently include the stack of the process.
This information is highly implementation dependent, and may change if
the implementation change.
recon_alloc:memory(allocated_types) is in bytes by default; you can change that using set_unit. It is not the memory that is currently used, but the memory reserved by the VM, grouped into different allocators. You can use recon_alloc:memory(used) instead. Also note that heap_size is in words, so on a 64-bit VM your sum of 683197586 words is roughly 5.5 GB, not 683 MB. More details in allocator() - Recon Library.
Searching through the Erlang source code for the eheap_alloc keyword I didn't come up with much. The most relevant piece of code was this XML code from erts_alloc.xml (https://github.com/erlang/otp/blob/172e812c491680fbb175f56f7604d4098cdc9de4/erts/doc/src/erts_alloc.xml#L46):
<tag><c>eheap_alloc</c></tag>
<item>Allocator used for Erlang heap data, such as Erlang process heaps.</item>
This says that process heaps are stored in eheap_alloc, but it doesn't say what else is stored there. The eheap_alloc stores everything your application needs to run, along with some additional space, so the VM doesn't have to request more memory from the OS every time something needs to be added. There are also things the VM must keep in memory that aren't associated with a specific process. For example, large binaries, even though they may be used within a process, are not stored inside that process's heap; they are stored in a shared binary heap called binary_alloc. The binary heap, along with the process heaps and some extra memory, is what makes up eheap_alloc.
In your case it looks like you have a lot of memory in your binary_alloc. binary_alloc is probably using a significant portion of your eheap_alloc.
For more details on binary handling checkout these pages:
http://blog.bugsense.com/post/74179424069/erlang-binary-garbage-collection-a-love-hate
http://www.erlang.org/doc/efficiency_guide/binaryhandling.html#id65224

Where is memory interleaving and memory split up into ranks happening in Linux kernel?

I am working on a course homework assignment on the sysfs virtual file system in the Linux kernel. As part of setting up the sysfs virtual file system, the Linux kernel organizes the physical memory into blocks, and further into sections, in the directory /sys/devices/system/memory. In that directory, memory chunks are represented as memory0, memory1, memory2, etc.
After digging through the Linux kernel, I have found out that the memory is split into 128 MB blocks and then further into sections of memory, and I found the code which does this in the C file here: Memory.c. In that C file, the method memory_dev_init() has the logic for the whole memory-block splitting and dividing into sections (or that's what I understood :) ). As per my professor, memory in Linux is split up into ranks, and ranks contain interleaved memory addresses as shown below:
rank0: [0-512KB] [2048KB-2560KB] [4096KB-4608KB] ...
rank1: [512KB-1024KB] [2560KB-3072KB] [4608KB-5120KB] ...
rank2: [1024KB-1536KB] [3072KB-3584KB] [5120KB-...
rank3: [1536KB-2048KB] [3584KB-4096KB] ...
As part of my homework, I want to change the rank format into this, so that I can get contiguous memory blocks:
rank0: [0-512KB] [512KB-1024KB] [1024KB-1536KB]...
rank1: [1536KB-2048KB] [2048KB-2560KB] [2560KB-3072KB]...
rank2: [3072KB-3584KB] [3584KB-4096KB] [4096KB-4608KB]...
rank3: [4608KB-5120KB] ...
So I just want to know where exactly this memory interleaving and the existing ranking happen in the current Linux kernel. Could anyone please point me in the right direction?
I'm not quite sure, as I don't see any practical use for this; it is indeed a sort of academic research... What you are trying to achieve is achievable by disabling memory interleaving entirely. I guess after you disable interleaving you will see the proper "picture" in sysfs as well.
In other words: no coding required, just a change of configuration.
Have a look at the memory interleave settings in the BIOS. Here's a post which describes how to do this on a couple of platforms.

Comparing using Map Reduce(Cloudera Hadoop 0.20.2) two text files of size of almost 3GB

I'm trying to do the following in Hadoop MapReduce (written in Java, on a Linux OS):
Text files 'rules-1' and 'rules-2' (3 GB total in size) contain some rules, each rule separated by a newline character, so the files can be read using the readLine() function.
These files 'rules-1' and 'rules-2' need to be imported as a whole from HDFS in every map function in my cluster, i.e. these files are not splittable across different map functions.
Input to the mapper's map function is a text file called 'record' (each line terminated by a newline character), so from the 'record' file we get the (key, value) pairs. This file is splittable and can be given as input to the different map functions used in the whole MapReduce process.
What needs to be done is to compare each value (i.e. each line from the record file) with the rules inside 'rules-1' and 'rules-2'.
The problem is, if I pull out each line of the rules-1 and rules-2 files into a static arraylist only once, so that each mapper can share the same arraylist and compare elements in the arraylist with each input value from the record file, I get a memory overflow error, since 3 GB cannot be stored in the arraylist at a time.
Alternatively, if I import only a few lines from the rules-1 and rules-2 files at a time and compare them to each value, MapReduce takes a lot of time to finish its job.
Could you guys provide me with any alternative ideas for how this can be done without the memory overflow error? Will it help if I put those rules files inside an HDFS-backed database or something? I'm running out of ideas, actually. Would really appreciate it if some of you could provide your valuable suggestions.
If your input files are small, you can load them into static variables and use the rules as an input.
If the above is not the case, I can suggest the following ways:
a) Give rules-1 and rules-2 a high replication factor, close to the number of nodes you have. Then you can read rules-1 and rules-2 from HDFS for each record in the input relatively efficiently, because it will be a sequential read from the local datanode.
b) If you can come up with a hash function which, when applied to a rule and to an input string, predicts without false negatives whether they can match, then you can emit this hash for the rules and for each input record and resolve all possible matches in the reducer. It will be very similar to the way a join is done using MR.
c) I would consider some other optimization techniques like building search trees, or sorting, since otherwise the problem looks computationally expensive and will take forever...
On this page, find "Real-World Cluster Configurations"; it covers file size configuration.
You could use the parameter "mapred.child.java.opts" in conf/mapred-site.xml to increase the memory available to your mappers. You might not be able to run as many map slots per server, but with more servers in your cluster you could still parallelize your job.
Read the content text file in your MapReduce map function, and read the keyword text file in your mapper (reading it from HDFS); split the value using StringTokenizer on value.toString(). In your mapper, write the HDFS text-file reading code so it reads line by line, and use two while loops to do the comparison. Whenever you want the data, send it to the reducer.
Split the 3 GB text file into several text files and run your previous MapReduce program on all of those files as usual.
For splitting the text file, I wrote a Java program; you decide how many lines you want to write into each text file.
