Like the title says, I mixed up which disk was which and ended up using macOS's "diskutil" first to delete the partition and then to reformat it as exFAT.
I did some basic checks and can't find the LUKS header at the beginning of the partition (presumably because either the delete or the reformat overwrote it).
My research indicates that LUKS2 (as opposed to LUKS version 1) wisely clones the LUKS header to the end of the device as a backup.
Since my LUKS partition was created recently (within 2 months) on Ubuntu 22.04, I am wondering if anyone can confirm:
Is it possible that my partition was LUKS2 rather than LUKS1? I am pretty sure I used the default cryptsetup command (without specifying a LUKS version), so the question becomes: does the default cryptsetup command create a LUKS2 header by any chance?
What is the correct way to check for a LUKS header backup at the end of the device?
i.e. how would I modify the command below to search backwards, starting at the end of the device?
sudo hexdump -C /dev/sdb3 | grep LUKS
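One possible approach (just a sketch, assuming the partition is /dev/sdb3 and that scanning the last 16 MiB is enough): use blockdev to get the partition size, then have dd read only the tail and scan that for the LUKS magic string:
# Size of the partition in bytes:
SIZE=$(sudo blockdev --getsize64 /dev/sdb3)
# Read roughly the last 16 MiB and scan it for the LUKS magic:
sudo dd if=/dev/sdb3 bs=1M skip=$(( SIZE / 1048576 - 16 )) 2>/dev/null | hexdump -C | grep LUKS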
Is there any other known way to reverse what "diskutil" might have done and recover the header?
I'm wondering whether it's possible to add timestamps to the journal file.
It appears that a date and time are recorded when SPSS is started, but if you have the program open for longer periods (i.e. days) without closing it, no further timestamps are added.
Having timestamps would make it much easier to find what I'm looking for when I go back through the journal.
This is what I use to insert timestamps into my output:
HOST COMMAND=['echo %time%'].
However the journal file only shows the syntax.
The journal file is kept flushed and closed by Statistics, so you can probably write to it from another process. I don't think the suggestion above will work, because it writes the code, but not its output, to the journal. Using Python, however, you could do something like this.
begin program.
import time
open(r"full path to your journal file", "a").write("* " + time.asctime() + "\n")
end program.
I can't see why it shouldn't work, unless you are not using a Windows operating system.
On a Unix-like system such as Linux or macOS, which runs a shell like bash, you would instead use
HOST COMMAND =['date'].
If you have the Python extension installed, you could also use Python code to print the date and time (which would be a platform-independent solution).
BEGIN PROGRAM.
import time
print time.ctime()
END PROGRAM.
I'm trying to reduce my App Store binary size, and we have lots of external libs that might be contributing to the size of the final ipa. Is there any way to find out how much each external static lib takes up in the final binary, other than removing them one at a time?
All of this information is contained in the link map, if you have the patience to sift through it (for large apps, it can be quite large). The link map is a listing, in human-readable text, of all the libraries, their object files, and all the symbols that were packaged into your app. Projects aren't configured to generate link maps by default, so you'll have to make a quick project-file change.
From within Xcode:
Under 'Build Settings' for your target, search for "map"
In the results below, under the 'Linking' section, set 'Write Link Map File' to "Yes"
Make a note of the full path and file name listed under 'Path to Link Map File'
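If you prefer the command line, the same setting can (I believe) be passed straight to xcodebuild as a build-settings override; the target name here is a placeholder:
xcodebuild -target YourTarget build LD_GENERATE_MAP_FILE=YES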
The next time you build your app you'll get a link map dumped to that file path. Note that the path is relative to your app's location in the DerivedData folder (usually ~/Library/Developer/Xcode/DerivedData/<your-app-name>-<random-string-of-letters-and-numbers>/Build/Intermediates/..., but YMMV). Since it's just a text file, you can read it with any text editor.
The contents of the link map are divided into 3 sections, of which 2 will be relevant to what you're looking for:
Object Files: this section contains a listing of all of the object files included in your final app, including your own code and that of any third-party libraries you've included. Importantly, each object file also lists the library where it came from;
Sections: this section, not relevant to your question, contains a list of the processor segments and their sections;
Symbols: this section contains the raw data that you're interested in: a list of all symbols/methods with their absolute location (i.e. address in the processor's memory map), size, and most important of all, a cross-reference to their containing object module (under the 'File' column).
From this raw data, you have everything you need to do the required size calculation. From #1, you see that each library is made up of some number of object modules; from #2, you see that each object module contributes some number of symbols, each occupying a certain size. A library's total footprint, then, is the sum of the sizes of all the symbols across all of its object modules. To perform the calculation itself, I'm sorry to say that I'm not aware of any existing tools that will do the requisite processing for you, but given that the link map is just a text file, with a little script magic and ingenuity you can construct a script to do the heavy lifting.
For example, I have a little sample project that links to the following library: https://github.com/ColinEberhardt/LinqToObjectiveC (the sample project itself is from a nice tutorial on ReactiveCocoa, here: http://www.raywenderlich.com/62699/reactivecocoa-tutorial-pt1), and I want to know how much space it occupies. I've generated a link map, TwitterInstant-LinkMap-normal-x86_64.txt (it runs in the simulator). In order to find all object modules included by the library, I do this:
$ grep -i "libLinqToObjectiveC.a" TwitterInstant-LinkMap-normal-x86_64.txt
which gives me this:
[ 8] /Users/XXX/Library/Developer/Xcode/DerivedData/TwitterInstant-ecppmzhbawtxkwctokwryodvgkur/Build/Products/Debug-iphonesimulator/libLinqToObjectiveC.a(LinqToObjectiveC-dummy.o)
[ 9] /Users/XXX/Library/Developer/Xcode/DerivedData/TwitterInstant-ecppmzhbawtxkwctokwryodvgkur/Build/Products/Debug-iphonesimulator/libLinqToObjectiveC.a(NSArray+LinqExtensions.o)
[ 10] /Users/XXX/Library/Developer/Xcode/DerivedData/TwitterInstant-ecppmzhbawtxkwctokwryodvgkur/Build/Products/Debug-iphonesimulator/libLinqToObjectiveC.a(NSDictionary+LinqExtensions.o)
The first column contains the cross-references to the symbol table that I need, so I can search for those:
$ cat TwitterInstant-LinkMap-normal-x86_64.txt | grep -e "\[ 8\]"
which gives me:
0x100087161 0x0000001B [ 8] literal string: PodsDummy_LinqToObjectiveC
0x1000920B8 0x00000008 [ 8] anon
0x100093658 0x00000048 [ 8] l_OBJC_METACLASS_RO_$_PodsDummy_LinqToObjectiveC
0x1000936A0 0x00000048 [ 8] l_OBJC_CLASS_RO_$_PodsDummy_LinqToObjectiveC
0x10009F0A8 0x00000028 [ 8] _OBJC_METACLASS_$_PodsDummy_LinqToObjectiveC
0x10009F0D0 0x00000028 [ 8] _OBJC_CLASS_$_PodsDummy_LinqToObjectiveC
The second column contains the size of the symbol in question (in hexadecimal), so if I add them all up, I get 0x103, or 259 bytes.
Even better, I can do a bit of stream hacking to whittle it down to the essential elements and do the addition for me:
$ cat TwitterInstant-LinkMap-normal-x86_64.txt | grep -e "\[ 8\]" | grep -e "0x" | awk '{print $2}' | xargs printf "%d\n" | paste -sd+ - | bc
which gives me the number straight up:
259
Doing the same for "\[ 9\]" (13016 bytes) and "\[ 10\]" (5503 bytes), and adding them to the previous 259 bytes, gives me 18778 bytes.
You can certainly improve upon the stream hacking I've done here to make it a bit more robust (in this implementation, you have to make sure you get the exact number of spaces right and quote the brackets), but you at least get the idea.
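As a starting point, here is a rough sketch that does the per-file sums in a single pass with GNU awk (gawk, for its strtonum function). It assumes the symbol lines have the shape address, size, [ N], name, with file indices below 100, and it will also pick up stray lines from the other sections, so treat it as illustrative rather than robust:
$ grep '^0x' TwitterInstant-LinkMap-normal-x86_64.txt | \
    gawk '{ idx = $3 $4; gsub(/[\[\]]/, "", idx);   # rebuild "[ 8]" into "8"
            sizes[idx] += strtonum($2) }            # sum the hex sizes per file index
          END { for (i in sizes) print "file", i, "=", sizes[i], "bytes" }'
You would still need to cross-reference each file index against the Object Files section to group the totals by library, but that is a straightforward join.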
Make a .ipa file of your app and save it on your system.
Then open the terminal and execute the following command:
unzip -lv /path/to/your/app.ipa
It will return a table of data about your .ipa file. The size column has the compressed size of each file within your .ipa file.
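To surface the biggest contributors quickly, you can (as a rough sketch) sort that table by the compressed-size column; this assumes the standard unzip -lv column layout, and the table's header and footer lines will end up sorted to the bottom:
unzip -lv /path/to/your/app.ipa | sort -k3,3 -n -r | head -20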
I think you should be able to extract the information you need from this:
symbols -w -noSources YourFileHere
Ref: https://devforums.apple.com/message/926442#926442
IIRC, it isn't going to give you clear summary information on each lib, but you should find that the functions from each library are clustered together, so with a bit of effort you can calculate the approximate contribution from each lib.
Also make sure that you set Generate Debug Symbols to NO in your build settings. This can reduce the size of your static library by about 30%.
In case it's part of your concern, a static library is just the relevant .o files archived together plus some bookkeeping. So a 1.7 MB static library — even if the code within it is the entire 1.7 MB — won't usually add 1.7 MB to your product. The usual rules about dead-code stripping will apply.
Beyond that you can reduce the built size of your code. The following probably isn't a comprehensive list.
In your target's build settings look for 'Optimization Level'. By switching that to 'Fastest, Smallest -Os' you'll permit the compiler to sacrifice some speed for size.
Make sure you're building for Thumb, the more compact ARM instruction set. Assuming you're using LLVM, that means making sure you don't have -mno-thumb anywhere in your project settings.
Also consider which architectures you want to build for. Apple doesn't allow submission of an app that supports both ARMv6 and the iPhone 5 screen, and has dropped ARMv6 support entirely from the latest Xcode. So there's probably no point in including it at this point.
I'm playing around with a binary and when I load it into my debugger, or even run readelf, I noticed the entry point is 0x530 instead of the usual 0x80****** that I'd learned ELFs were loaded at.
Why is this? Is there anything else going on? The binary is linked and not stripped.
instead of the usual 0x80****** that I'd learned ELFs were loaded at.
You learned wrong.
While 0x8048000 is the usual address that 32-bit x86 Linux binaries are linked at, that address is by no means universal or special.
64-bit x86_64 and aarch64 binaries are linked at a default address of 0x400000, and powerpc64le binaries at a default address of 0x10000000.
There is no reason a binary could not be linked at any other (page-aligned) address (so long as it is not 0 and allows for sufficient stack at the high end of the address space).
Why is this?
The binary was likely linked with a custom linker script. Nothing wrong with that.
As mentioned by Employed, the entry address is not fixed.
Just to verify, I've tried on x86_64:
gcc -Wl,-Ttext-segment=0x800000 hello_world.c
which links the text segment at 0x800000 instead of the default 0x400000 (the ELF header itself gets loaded at 0x800000, so the entry point ends up at 0x800000 plus the size of the headers).
Then both:
readelf -h a.out
and gdb -ex 'b _start' tell me the entry is at 0x800440 as expected (the headers occupy 0x440 bytes).
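For a quick check without reading the whole header dump, you can grep readelf's output for the entry line (the address shown is just the one from the experiment above):
$ readelf -h a.out | grep Entry
  Entry point address:               0x800440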
This is because that value is an input that tells the Linux kernel where to set the program counter when it executes a new program.
The default 0x400000 comes from the default linker script used. You can also modify the linker script as mentioned in https://stackoverflow.com/a/31380105/895245, change 0x400000 there, and use the new script with -T script.
If I set it to anything at or below 0x200000 (2 MiB), or to other low addresses, the program gets killed. I think this is because ld always aligns the loaded segments to multiples of 2 MiB, the largest supported page size (huge pages), so anything lower would place the segment at address 0, which is bad: Why is the ELF execution entry point virtual address of the form 0x80xxxxx and not zero 0x0?
I'm looking for a nice easy way to find what sectors occupy a given file. My language preference is C#.
From my A-Level Computing class I was taught that a hard drive has a lookup table in the first few KB of the disk. In this table there is a linked list for each file detailing which sectors that file occupies. So I'm hoping there's a convenient way to look in this table for a certain file and see which sectors it occupies.
I have tried Googling, but I'm finding nothing useful. Maybe I'm not searching for the right thing, but I can't find anything at all.
Any help is appreciated, thanks.
About Drives
The physical geometry of modern hard drives is no longer directly accessible by the operating system. Early hard drives were simple enough that it was possible to address them according to their physical structure: cylinder-head-sector. Modern drives are much more complex and use systems like zone bit recording, in which not all tracks have the same number of sectors. It's no longer practical to address them according to their physical geometry.
from the fdisk man page:
If possible, fdisk will obtain the disk geometry automatically. This is not necessarily the physical disk geometry (indeed, modern disks do not really have anything like a physical geometry, certainly not something that can be described in the simplistic Cylinders/Heads/Sectors form).
To get around this problem modern drives are addressed using Logical Block Addressing, which is what the operating system knows about. LBA is an addressing scheme where the entire disk is represented as a linear set of blocks, each block being a uniform amount of bytes (usually 512 or larger).
About Files
In order to understand where a "file" is located on a disk (at the LBA level) you will need to understand what a file is. This is going to be dependent on what file system you are using. In Unix style file systems there is a structure called an inode which describes a file. The inode stores all the attributes a file has and points to the LBA location of the actual data.
Ubuntu Example
Here's an example of finding the LBA location of file data.
First get your file's inode number
$ ls -i
659908 test.txt
Run the file system debugger. "yourPartition" will be something like sda1; it is the partition that your file system is located on.
$ sudo debugfs /dev/yourPartition
debugfs: stat <659908>
Inode: 659908 Type: regular Mode: 0644 Flags: 0x80000
Generation: 3039230668 Version: 0x00000000:00000001
...
...
Size of extra inode fields: 28
EXTENTS:
(0): 266301
The number under "EXTENTS", 266301, is the logical block in the file system that your file is located on. If your file is large there will be multiple blocks listed. There's probably an easier way to get that number, I couldn't find one.
To validate that we have the right block, use dd to read that block off the disk. To find out your file system's block size, use dumpe2fs.
dumpe2fs -h /dev/yourPartition | grep "Block size"
Then put your block size in the ibs= parameter and the extent's logical block in the skip= parameter, and run dd like this:
sudo dd if=/dev/yourPartition of=success.txt ibs=4096 count=1 skip=266301
success.txt should now contain the original file's contents.
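To go from that filesystem block number to an absolute sector (LBA) on the disk, which is what the question actually asks about, you can combine it with the partition's starting sector. A minimal sketch, assuming a 4096-byte filesystem block size, 512-byte logical sectors, and the file system on sda1:
# Starting sector of the partition, from sysfs:
PART_START=$(cat /sys/class/block/sda1/start)
# First 512-byte sector of the file's data, relative to the start of the whole disk:
echo $(( PART_START + 266301 * (4096 / 512) ))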
sudo hdparm --fibmap file
It works for ext, vfat and NTFS... maybe more.
FIBMAP is also available programmatically, via the Linux FIBMAP ioctl in C.
One of my favorite features of Google docs is the fact that it's constantly automatically saving versions of my document as I work. This means that even if I forget to save at a certain point before making a critical change there's a good chance that a save point has been created automatically. At the very least, I can return the document to a state prior to the mistaken change and continue working from that point.
Is there a tool with an equivalent feature for a Ruby coder running on Mac OS (or UNIX)?
For example, a tool that will do an automatic Git check-in every couple of minutes to my local repository for the files I'm working on. Maybe I'm paranoid, but this small bit of insurance could put my mind at ease during my day-to-day work.
VIM
Some may hate my response to this, but I use VIM quite often when coding, and it has an auto-save feature, albeit an auto-save to a swap file. It is also extensible, so automatic commits can be done.
To see how extensible VIM is, check out this post: How can I script vim to run perltidy on a buffer?
The NetBeans IDE has a local history, which is enabled by default. Each time you save the file (Ctrl-S), an entry is added to the file's history. As with the supported VCSs, you can browse the history, see the diffs, and revert to a previous state.
Plus, NetBeans is known for having really good support for Ruby development.
RubyMine automatically saves as you type and automatically saves a history of local changes. As far as I know, it won't automatically commit to Git, but it does integrate with Git. RubyMine works quite well on Mac OS X.
If you go down this “autocommit” road, always be sure to keep such history local. As Russell Steen commented, automatic checkpoints are not something that belongs in any kind of published, advertised branch. It is fine to keep for local reference, but otherwise it is just an ungroomed mess unfit for publication.
It is not too hard to write a simple script that will ‘autocommit’ to a specified branch. The linked script is not one that I use, just one that I found. It is a bit ugly in that it forcibly changes branches, so you would have to make sure it does not run while you are doing stuff manually. Also, it uses ‘porcelain’ Git commands instead of sticking to the lower-level (but correspondingly more interface-stable) ‘plumbing’ commands.
You might also be interested enough to review a recent thread on the Git mailing list that covered some of this ground.
In particular, it references another script that does not “steal the current branch” and does a better job of using plumbing commands (but inexplicably, still uses git add instead of git ls-files and git update-index).
All in all, writing a script to do what you want is not terribly difficult. Doing it right (using plumbing, not stomping on the active branch (which is easy when you use plumbing), etc.) is a bit more effort, but worth it for the bits of Git that you will learn along the way.
You could even use the old shell implementation of git-commit as a starting point (and a good example of the plumbing and how to use it).
To get a checkpoint on a regular basis, just stick a script like one of these in a crontab; a sketch follows below.
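For concreteness, here is a minimal sketch of what such a plumbing-based autocommit could look like. The branch name and index path are assumptions, and it only snapshots files Git already tracks:
#!/bin/sh
# Snapshot all tracked files onto a dedicated branch without touching
# HEAD, the current branch, or the real index.
BRANCH=refs/heads/autosave                      # hypothetical checkpoint branch
export GIT_INDEX_FILE=.git/autosave-index       # private index; leaves your staged changes alone
git ls-files -z | git update-index -z --add --stdin  # stage tracked files as they are right now
TREE=$(git write-tree)
if PARENT=$(git rev-parse -q --verify "$BRANCH"); then
    COMMIT=$(echo "autosave $(date)" | git commit-tree "$TREE" -p "$PARENT")
else
    COMMIT=$(echo "autosave $(date)" | git commit-tree "$TREE")
fi
git update-ref "$BRANCH" "$COMMIT"
A crontab entry along the lines of */5 * * * * cd /path/to/repo && sh autosave.sh (paths are placeholders) would then give you a checkpoint every five minutes.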
Also Vim :) But what I use is much different from what kzh wrote.
I don't use a version control system for my automatic backup files; I'd rather see all versions of all files in my ~/vim_backup directory. (Note: of course I do use git for version-controlling my source files, but now we are talking about automatic backups.) The following lines in my .vimrc will make Vim create a backup file each time the file is saved. The name of the backup file is the name of the normal file plus the current date and time.
The following lines are in my .vimrc file:
if v:version >= 700
" If I have a ~/vim_backup directory, let that be the backup "
" directory; otherwise do not back up. "
" (Vim version earlier than 7.0 do not have the finddir function.) "
if finddir("~/vim_backup") != ""
set bdir=~/vim_backup/
set backup
else
set nobackup
endif
endif
" We set the 'backupext' option to contain the current date and time, so "
" that the name of the backup file will be the concatenation of the name "
" of the normal file and the current date and time. "
function! HRefreshBackup()
execute ":set backupext=" . strftime(".%y%m%d_%H%M")
" You may want to have %H%M%S instead of %H%M if you want to have the "
" possibility of having multiple backups in a minute. "
endfunction
" Refresh the backup file name before each "save" "
au BufWritePre * call HRefreshBackup()
An example of what my ~/vim_backup directory looks like:
$ ls -ltr ~/vim_backup | head
total 105692
-rwx--x--x 1 hcs hcs 252 2009-12-19 06:49 Sync_flash.091222_0902
-rwxr-xr-x 1 hcs hcs 819 2009-12-19 06:49 hk.091229_1637
-rwxr-xr-x 1 hcs hcs 507 2009-12-19 06:49 FOLLOW_LINK.091220_0802
-rwx--x--x 1 hcs hcs 212 2009-12-19 06:49 Cut.091230_2113
-rwxr-xr-x 1 hcs hcs 275320 2009-12-19 06:51 margitka.100116_1949
-rw-r--r-- 1 hcs hcs 80 2009-12-19 06:51 localrc_dorsu.vim.100101_1112
-rwx--x--x 1 hcs hcs 10335 2009-12-19 06:51 Video.091222_1754
-rwxr-xr-x 1 hcs hcs 1255 2009-12-19 06:51 Update.091222_1754
-rwxr-xr-x 1 hcs hcs 716 2009-12-19 06:51 SshMaker.091222_1754
This wastes some disk space, since all versions of all files are stored, so it may be worth periodically archiving and compressing them. I tried several tools for compressing the set of backup files, and "rar" with the "-s" (solid archive) option was the best: 100 megabytes of backup files were created over the last three months, and "rar -s" compressed them to 3 MB.
I use the Eclipse IDE for several languages and for all projects which do not have a GUI. One nice bonus is the local history and the ability to compare the current version with any previous version, either completely replacing the current version with a given previous version, or copying across some changes and ignoring others.
Have a look at this for using Ruby in Eclipse - http://www.ibm.com/developerworks/opensource/library/os-rubyeclipse/