What is the difference between strip --remove-section vs objcopy --remove-section - binutils

Both the strip and objcopy binary utilities support a --remove-section=sectionname option.
Is there a difference between the two options? Are there cases when one should be preferred over the other?
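For reference, the two invocations look essentially alike on the command line; a minimal sketch (the section name .comment and the file names are arbitrary examples):

strip --remove-section=.comment myprog
objcopy --remove-section=.comment myprog myprog.stripped

Note that strip modifies the file in place by default, while objcopy can write the result to a separate output file.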

Related

Bazel select condition for a specific GCC major-version

We have to pass a special linkopts flag to cc_library rules that use <filesystem>, specifically for the GCC version that ships with Debian 10 (gcc 8.3).
I don't want to make developers pass a --config=old_gcc or similar at the top level.
I was hoping an incantation kind of like this would work:
linkopts = select({
"#bazel_tools//tools/cpp:gcc": ["-lstdc++fs"],
"//conditions:default": [],
}),
But (a) gcc is not a configurable attribute that select() can use, and (b) more specifically, we should test that the major version is 8 (we'll only support 8 or above).
How do I extract an is_gcc8-like config_setting I can select on like this for targets using <filesystem>? TIA!
One way to do this is to switch to a manual CROSSTOOL setup instead of relying on the automatic crosstool setup (documentation here). This lets you specify a set of linker flags to apply when compiling with a certain combination of --cpu and --compiler.
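A sketch of what the BUILD side could then look like, assuming your manual CROSSTOOL declares a compiler identifier such as gcc-8 (both values below are placeholders for whatever your crosstool actually defines):

config_setting(
    name = "is_gcc8",
    values = {
        "cpu": "k8",          # placeholder
        "compiler": "gcc-8",  # placeholder: must match your CROSSTOOL's compiler name
    },
)

cc_library(
    name = "uses_filesystem",
    srcs = ["fs_user.cc"],
    linkopts = select({
        ":is_gcc8": ["-lstdc++fs"],
        "//conditions:default": [],
    }),
)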

What does .text.unlikely mean in ELF object files?

In my objdump -t output, I see the following two lines:
00000000000004d2 l F .text.unlikely 00000000000000ec function-signature-goes-here [clone .cold.427]
and
00000000000018e0 g F .text 0000000000000690 function-signature-goes-here
I know l means local and g means global. I also know that .text is a section, or a type of section, in an object file, containing compiled program instructions. But what is .text.unlikely? Assuming it's a different section (or type-of-section) from .text - what's the difference?
In my GCC v5.4.0 manpage, I found the following switch:
-freorder-functions
which says:
Reorder functions in the object file in order to improve code
locality. This is implemented by using special subsections
".text.hot" for most frequently executed functions and
".text.unlikely" for unlikely executed functions. Reordering is done
by the linker so object file format must support named sections and
linker must place them in a reasonable way.
Also profile feedback must be available to make this option effective.
See -fprofile-arcs for details.
Enabled at levels -O2, -O3, -Os.
It looks like this binary was compiled at one of those optimization levels (or with that switch explicitly), so functions are organized into subsections to improve spatial locality.
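To see this yourself, a minimal sketch of a profile-guided build (program and file names are placeholders):

gcc -O2 -fprofile-generate prog.c -o prog    # instrumented build
./prog                                       # run a representative workload to collect profile data
gcc -O2 -fprofile-use prog.c -o prog         # rebuild using the collected profile
objdump -t prog | grep '\.text\.unlikely'    # symbols placed in the cold subsection

GCC will also place functions explicitly annotated with __attribute__((cold)) into .text.unlikely, even without profile feedback.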

Silversearcher/ack vs find,grep

Currently when I have to search for complex patterns in code, I typically use a combination of find and grep in the form:
find / \( -type f -regextype posix-extended -regex '.*python3.*py' \) -exec grep -EliI '\b__[[:alnum:]]*_\b' {} \; -exec cat {} \; > ~/python.py
While this looks like a long command to type, it's actually quite short if you use zsh: I just type f (the first character) and jump straight to this command from my history. Furthermore, the regex syntax in find/grep is standardized and well tested, so there are no surprises or missed matches.
ripgrep/ag etc. are newer software, which might not be supported a few years down the line once the original maintainer loses interest.
Is there any plan to include the .gitignore rules or optimizations from ag/ack/rg in grep or some other version of grep? Is there any reason why these optimizations were/are not going to be included in grep?
For those of you who switched over: did you find it worthwhile to switch to rg/ag/ack, especially given that there is a learning curve for these tools as well?
Use ag.
The key part of your example: ag -G '.*python3.*py' '\b__[[:alnum:]]*_\b'
Ag is here to stay and uses Perl-compatible regexes (PCRE), which are far more flexible than POSIX basic or extended regular expressions. grep -P uses the Perl regex engine, so that is just akin to using ag without some of the latter's more modern features. Likewise, ack is like ag but slower (though admittedly it has a few more bells and whistles). Ag's filename regex filtering (the -G flag, as exemplified above) and built-in file-type filters are very handy (e.g. --python). The recently renamed .ignore file also provides finer tuning.
Since most modern scripting languages (Perl, Python, Ruby) use PCRE or handle regexes with similar features, and many compiled languages (Java, C++) have near-equivalent feature sets (e.g. java.util.regex, Boost.Regex), I consider this the main reason to switch. Moreover, it is satisfying to unify your programming and command-line skill sets.
From my point of view, ripgrep is ag's main contender because it is faster and has an easy way to add file types. That said, its regex engine isn't as flexible: no backreferences or look-arounds. With this in mind, I recommend ag.
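For comparison, a rough ripgrep equivalent of the ag command above, using a glob rather than a regex for the file filter (my best guess at a direct translation):

rg -g '*python3*py' '\b__[[:alnum:]]*_\b'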

Why do some command options have one dash while others have two dashes

Some command options are with one dash e.g. ruby -c (check syntax) and ruby --copyright (print copyright). Is there any pattern to this?
These are known as short and long options. Which names/formats a developer uses for their program's options is entirely up to them.
However, there are some widespread conventions. Like -v/--version for printing version number, -h/--help for printing usage instructions, etc.
Sadly, most commandline tools on OSX seem not to conform to -v/-h.
Good CLI (command-line interface) design dictates that a program's most useful options should have two formats, short and long. You use the short format in your everyday life (because it's faster to type).
ps aux | grep ruby
Long ones are for scripts that you write and rarely touch (they're easier to read and understand).
mongod --logpath /path/to/logs --dbpath /path/to/db --fork --smallfiles
Many less-used options may have only the long version (because, you know, there are only 26 letters in the Latin alphabet).
Many Rails-related commands follow a pattern: the one-dash form is an abbreviation of a two-dash option, e.g. rspec -o FILE is a synonym for rspec --out FILE.
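In your own shell scripts, GNU getopt(1) is one way to accept both forms; a minimal sketch (the -o/--out option mirrors the rspec example and is just an illustration):

# parse -o FILE / --out FILE
args=$(getopt -o o: --long out: -n mytool -- "$@") || exit 1
eval set -- "$args"
while true; do
  case "$1" in
    -o|--out) outfile=$2; shift 2 ;;
    --) shift; break ;;
  esac
done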

Tool for comparison and report generation

I'm looking for a tool/program that can generate a report with information like lines added, modified, and deleted. I'm not looking for Araxis Merge or DiffMerge kinds of tools; instead, I want a tool that can generate the report for me in HTML/text format. A non-.exe kind of tool would be appreciated, since I have software restrictions. I can still use tools shipped as executable .jar, .bat, etc.
If you are on a Windows computer, you have fc.exe, which is a built-in DOS app.
FC [/A] [/C] [/L] [/LBn] [/N] [/OFF[LINE]] [/T] [/U] [/W] [/nnnn]
   [drive1:][path1]filename1 [drive2:][path2]filename2
FC /B [drive1:][path1]filename1 [drive2:][path2]filename2

  /A         Displays only first and last lines for each set of differences.
  /B         Performs a binary comparison.
  /C         Disregards the case of letters.
  /L         Compares files as ASCII text.
  /LBn       Sets the maximum consecutive mismatches to the specified
             number of lines.
  /N         Displays the line numbers on an ASCII comparison.
  /OFF[LINE] Do not skip files with offline attribute set.
  /T         Does not expand tabs to spaces.
  /U         Compare files as UNICODE text files.
  /W         Compresses white space (tabs and spaces) for comparison.
  /nnnn      Specifies the number of consecutive lines that must match
             after a mismatch.
  [drive1:][path1]filename1
             Specifies the first file or set of files to compare.
  [drive2:][path2]filename2
             Specifies the second file or set of files to compare.
Maybe it could work?
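For example, redirecting fc's output produces a plain-text report (file names are placeholders):

fc /N old_version.txt new_version.txt > report.txt

The /N switch includes line numbers, which makes the resulting report easier to read.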
