docker -w dir prefixed with another dir [duplicate]

Earlier today, I was trying to generate a certificate with a DNSName entry in the SubjectAltName extension:
$ openssl req -new -subj "/C=GB/CN=foo" -addext "subjectAltName = DNS:foo.co.uk" \
-addext "certificatePolicies = 1.2.3.4" -key ./private-key.pem -out ~/req.pem
This command led to the following error message:
name is expected to be in the format /type0=value0/type1=value1/type2=... where characters may be escaped by \. This name is not in that format: 'C:/Program Files/Git/C=GB/CN=foo'
problems making Certificate Request
How can I stop Git Bash from treating this string parameter as a filepath, or at least stop it from making this alteration?

The release notes for today's Git Bash 2.21.0 update mention this as a known issue. Fortunately, they also describe two solutions to the problem:
If you specify command-line options starting with a slash, POSIX-to-Windows path conversion will kick in converting e.g. "/usr/bin/bash.exe" to "C:\Program Files\Git\usr\bin\bash.exe". When that is not desired -- e.g. "--upload-pack=/opt/git/bin/git-upload-pack" or "-L/regex/" -- you need to set the environment variable MSYS_NO_PATHCONV temporarily, like so:
MSYS_NO_PATHCONV=1 git blame -L/pathconv/ msys2_path_conv.cc
Alternatively, you can double the first slash to avoid POSIX-to-Windows path conversion, e.g. "//usr/bin/bash.exe".
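For example, applying the MSYS_NO_PATHCONV workaround to the command from the question might look like this (an untested sketch; the paths are simply the ones from the question):
# Disable POSIX-to-Windows path conversion for this single invocation only,
# so the -subj and -addext values reach openssl unmodified
MSYS_NO_PATHCONV=1 openssl req -new -subj "/C=GB/CN=foo" \
-addext "subjectAltName = DNS:foo.co.uk" \
-addext "certificatePolicies = 1.2.3.4" -key ./private-key.pem -out ~/req.pem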

Using MSYS_NO_PATHCONV=1 can be problematic if your script accesses files.
Prefixing with a double forward slash doesn't work for the specific case of OpenSSL, as it causes the first DN segment key to be read as "/C" instead of "C", so OpenSSL drops it, outputting:
req: Skipping unknown attribute "/C"
Instead, I used a function that detects if running on bash for Windows, and prefixes with a "dummy" segment if so:
# If running on bash for Windows, any argument starting with a forward slash is automatically
# interpreted as a drive path. To stop that, you can prefix with 2 forward slashes instead
# of 1 - but in the specific case of openssl, that causes the first CN segment key to be read as
# "/O" instead of "O", and is skipped. We work around that by prefixing with a spurious segment,
# which will be skipped by openssl
function fixup_cn_subject() {
    local result="${1}"
    case $OSTYPE in
        msys|win32) result="//XX=x${result}" ;;
    esac
    echo "$result"
}
# Usage example
MY_SUBJECT=$(fixup_cn_subject "/C=GB/CN=foo")
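The variable can then be passed straight to openssl; a minimal sketch, reusing the key and output paths from the question:
# On Windows this expands to //XX=x/C=GB/CN=foo and openssl skips the dummy XX segment;
# elsewhere it is just /C=GB/CN=foo
openssl req -new -subj "$MY_SUBJECT" \
-addext "subjectAltName = DNS:foo.co.uk" -key ./private-key.pem -out req.pem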

I found a workaround: pass a dummy value as the first attribute, for example -subj '//SKIP=skip/C=gb/CN=foo'

I had the same issue using bash, but running the exact same command in PowerShell worked for me. Hopefully this will help someone.

Related

.clang-tidy configuration file content is being ignored

I want to modify the checks that the code analyzer program clang-tidy is doing, but it seems like the content of the configuration file .clang-tidy is being ignored.
I create the file by calling clang-tidy with the flag -dump-config and redirect the output to the file .clang-tidy.
Then I call sed to replace the value 800 with the value 700, which corresponds to the option with key google-readability-function-size.StatementThreshold. The specific option is not important to me, this is just for testing.
I verify that the value has indeed been changed.
Lastly, I rerun clang-tidy to see if it has accepted the new configuration, but it remains unchanged.
# generate config
clang-tidy -dump-config > .clang-tidy
# change config
sed -i 's/800/700/' .clang-tidy
# verify change
grep '700' .clang-tidy
# use config, does not work
clang-tidy -config '' -dump-config
The CheckOption remains at the default value, the content of the config file has been ignored:
CheckOptions:
  # some lines omitted for brevity
  - key:   google-readability-function-size.StatementThreshold
    value: '800'
Running clang-tidy -config '' -dump-config -explain-config shows that the configuration file has at least been found, i.e. many clang-analyzer specific checks are enabled in the detected config file, but the check google-readability-function-size.StatementThreshold is not listed.
I also tried passing the config directly as command line parameter with the command clang-tidy -config="{CheckOptions: [ {key: google-readability-function-size.StatementThreshold, value: 700} ]}" -dump-config, but got the same result.
The command clang-tidy --version gives the following output, running on Ubuntu 20.04:
LLVM (http://llvm.org/):
LLVM version 10.0.0
Optimized build.
Default target: x86_64-pc-linux-gnu
Host CPU: haswell
To see the change, you need to enable the check:
Checks: 'google-readability-function-size'
You can see it changed in the effective configuration with:
clang-tidy --dump-config
Another pitfall to be aware of is that errors parsing the values will be silently discarded.
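As a minimal sketch (assuming the clang-tidy 10 YAML layout shown above), a .clang-tidy file that both enables the check and overrides the option could be generated and verified like this:
# write a minimal config that enables the check and overrides the threshold
cat > .clang-tidy <<'EOF'
Checks: 'google-readability-function-size'
CheckOptions:
  - key:   google-readability-function-size.StatementThreshold
    value: '700'
EOF
# the effective configuration should now report the new value
clang-tidy --dump-config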

How can I nix-env install a derivation from a nix expression file?

I've got a default.nix file that builds a derivation (at least that's my understanding of it).
{ nixpkgs ? import <nixpkgs> {}, compiler ? "ghc864" } :
nixpkgs.pkgs.haskell.packages.${compiler}.callCabal2nix "bhoogle" (./.) {}
I can successfully nix-build this. Is there a way I can install this into my user profile with nix-env directly? For example something like nix-env -i -f default.nix.
Otherwise I need to define a package in my system profile with something like:
example = pkgs.callPackage /home/chris/example/default.nix {};
Quite literally my initial guess (thanks @Robert):
nix-env -i -f default.nix
Documented in nix --help:
--file / -f path
    Specifies the Nix expression (designated below as the active Nix expression)
    used by the --install, --upgrade, and --query --available operations to
    obtain derivations. The default is ~/.nix-defexpr.
    If the argument starts with http:// or https://, it is interpreted as the
    URL of a tarball that will be downloaded and unpacked to a temporary
    location. The tarball must include a single top-level directory containing
    at least a file named default.nix.
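Since the expression takes a compiler argument, you can also pass it explicitly at install time; a sketch, assuming nix-env accepts the common --argstr option (here just restating the default from default.nix):
# install the derivation described by ./default.nix into the user profile,
# explicitly pinning the compiler attribute used by callCabal2nix
nix-env -i -f default.nix --argstr compiler ghc864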

How to see the GNU debuglink value of an ELF file?

So I can add a link to a debug symbol file like this objcopy --add-gnu-debuglink=$name.dbg $name, but how can I later retrieve that value again?
I checked with readelf -a and grepped for \.dbg without any luck. Similarly, I checked with objdump -sj .gnu_debuglink (.gnu_debuglink is the section) and could see the value there:
$ objdump -sj .gnu_debuglink helloworld|grep \.dbg
0000 68656c6c 6f776f72 6c642e64 62670000 helloworld.dbg..
However, is there a command that allows me to retrieve the exact value again (i.e. helloworld.dbg in the above example)? That is, the file name only ...
I realize I could use some shell foo here, but it seems odd that an option exists to set this value but none to retrieve it. So I probably just missed it.
You can use readelf directly:
$ readelf --string-dump=.gnu_debuglink helloworld
String dump of section '.gnu_debuglink':
[ 0] helloworld
[ 1b] 9
I do not know what the second entry means (it seems to always be different). To get rid of the header and the offsets, you can use sed:
$ readelf --string-dump=.gnu_debuglink helloworld | sed -n '/]/{s/.* //;p;q}'
helloworld
Something like this should work:
objcopy --output-target=binary --set-section-flags .gnu_debuglink=alloc \
--only-section=.gnu_debuglink helloworld helloworld.dbg
--output-target=binary avoids adding ELF headers. --set-section-flags .gnu_debuglink=alloc is needed because objcopy only writes allocated sections by default (with the binary emulation). And --only-section=.gnu_debuglink finally identifies the answer. See this earlier answer.
Note that the generated file may have a trailing NUL byte and four bytes of CRC, so some post-processing is needed to extract everything up to the first NUL byte (perhaps using head -z -n 1 helloworld.dbg | tr -d '\0' or something similar).
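Put together, a sketch of the full extraction (this assumes GNU coreutils head with -z support):
# dump only the .gnu_debuglink section as raw bytes
objcopy --output-target=binary --set-section-flags .gnu_debuglink=alloc \
--only-section=.gnu_debuglink helloworld helloworld.dbg
# keep everything up to the first NUL byte, i.e. the linked file name
head -z -n 1 helloworld.dbg | tr -d '\0'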

Why can't ld called from MSYS find (existing static) library when arguments are read from a response @file containing backslashes?

This is basically the same issue as in mingw ld cannot find some library which is exist in the search path, MinGW linker can't find MPICH2 libraries - and I'm aware that there are heaps of posts on StackOverflow regarding the issue of static and dynamic linking with MinGW - but I couldn't find anything that explains how I can troubleshoot.
I am building a project with a huge linker command line (via g++) on MinGW, in an MSYS2 shell (git-bash.exe). The process fails with, among others:
/z/path/to/../../../../i686-w64-mingw32/bin/ld.exe: cannot find -lssl
I add -Wl,--verbose to the g++ linker call (to be passed to ld), and for -L/z/path/to/libs/openssl/lib/mingw -lssl I can see:
...
attempt to open /z/path/to/libs/openssl/lib/mingw/libssl.a failed
...
attempt to open /z/path/to/libs/openssl/lib/mingw/ssl.dll failed
attempt to open /z/path/to/libs/openssl/lib/mingw\libssl.a failed
...
But this is weird, because the file exists?
$ file /z/path/to/libs/openssl/lib/mingw/libssl.a
/z/path/to/libs/openssl/lib/mingw/libssl.a: current ar archive
(... and it was built with the same compiler on the same machine)?
Weirdly, it attempts to open once with a forward slash (.../libssl.a) and once with a backslash (...\libssl.a) - but at least the first path checks out in a bash shell, as shown above.
It gets even worse if I try to specify -l:libssl.a -- or if I specify -L/z/path/to/libs/openssl/lib/mingw -Wl,-Bstatic -lssl -- instead; then all attempts to open are with a backslash:
...
attempt to open /z/path/to/scripts/other/build/openssl/build/mingw/lib\libssl.a failed
attempt to open /z/path/to/libs/openssl/lib/mingw\libssl.a failed
...
To top it all off, if I look it up manually through the command line using ld, it is found ?!:
$ ld -L/z/path/to/libs/openssl/lib/mingw -lssl --verbose
attempt to open Z:/path/to/libs/openssl/lib/mingw/libssl.dll.a failed
attempt to open Z:/path/to/libs/openssl/lib/mingw/ssl.dll.a failed
attempt to open Z:/path/to/libs/openssl/lib/mingw/libssl.a succeeded
Does anyone have an idea why this happens, and how can I get ld to finally find these libraries? Or rather - how can I troubleshoot, and understand why these libraries are not found, when they exist at the paths where ld tries to open them?
OK, found something more - not sure if this is a bug; but my problem is that I'm actually reading arguments from a file (otherwise I get g++: Argument list too long). So, to simulate that:
$ echo " -Wl,--verbose -L/z/path/to/libs/openssl/lib/mingw -lssl -lcrypto " > tmcd3
$ g++ #tcmd3 2>&1 | grep succeeded | grep ssl
# nothing
$ g++ `cat tcmd3` 2>&1 | grep succeeded | grep ssl
attempt to open Z:/path/to/libs/openssl/lib/mingw/libssl.a succeeded
attempt to open Z:/path/to/libs/openssl/lib/mingw/libcrypto.a succeeded
... it turns out, if the very same arguments are fed on the command line, then static library lookup succeeds - but if the arguments are read from a file through the @ at-sign, then static library lookup fails?! Unfortunately, I cannot use that on my actual project, since even with cat, I'd still get g++: Argument list too long ... So how can I fix this?
MSYS has special handling of directories as arguments when they are used in the shell. This translates e.g. /<drive_letter>/blabla to the proper Windows-style paths. This is to accommodate Unix programs that don't handle Z:-style directory roots.
What you see here is that MSYS isn't performing this interpretation for strings read from a file. When you think about it, it's very logical, but as you have experienced first-hand, also sometimes annoying.
Long story short: don't put Unix style paths in files with command arguments. Instead, pass them through e.g. cygpath -w, which works in MSYS2 (which should be the MSYS that Git for Windows 2+ comes with).
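A sketch of what that could look like when generating the response file (the paths are the ones from the question; cygpath -m gives the mixed Z:/... form, while -w would give backslashes):
# convert the POSIX-style library path to a Windows-style one
# before it is written into the response file
LIBDIR=$(cygpath -m /z/path/to/libs/openssl/lib/mingw)
echo " -Wl,--verbose -L$LIBDIR -lssl -lcrypto " > tcmd3
g++ @tcmd3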
Ok, with some more experiments, I noticed that:
-L/z/path/to/libs/openssl/lib/mingw, the Unix path specification, tends to fail - while if we specify the same, except starting with a Windows drive letter, that is:
-LZ:/path/to/libs/openssl/lib/mingw, then things work - also from an arguments file with the @ at-sign:
$ echo " -Wl,--verbose -LZ:/path/to/libs/openssl/lib/mingw -lssl -lcrypto " > tmcd3
$ g++ #tcmd3 2>&1 | grep succeeded | grep ssl
attempt to open Z:/path/to/libs/openssl/lib/mingw/libssl.a succeeded
attempt to open Z:/path/to/libs/openssl/lib/mingw/libcrypto.a succeeded
I guess, since the shell is MSYS2/git-bash.exe, entering full POSIX paths on the shell with /z/... is not a problem, because the shell will convert them - but in a file, there is nothing to convert them, so we must use the Windows/MinGW convention to specify them...

wkhtmltopdf attempting to load from http rather than file

Here's an odd little problem that's led me to post my first question on SO. I am using wkhtmltopdf to convert an HTML document to a PDF as part of a Rails app. To do so, I am rendering the Rails web page to a static HTML file in a temp directory, copying a static header, footer and images to the same temp directory, then executing wkhtmltopdf using "system".
This works perfectly in Development and Test environments. In my Staging env, it does not. I suspected permissions at first, but the first couple of parts of that process (creating the HTML static files and copying them to the directory) are working. I can run wkhtmltopdf from the command line in that temp directory and get the expected outcome. Finally, I ran wkhtmltopdf via both "system" and backticks through the Rails console in staging environment, and here's what I get as output:
> `wkhtmltopdf --footer-html tmp/invoices/footer.html --header-html tmp/invoices/header.html -s Letter -L 0in -R 0in -T 0.5in -B 1in tmp/invoices/test.html tmp/invoices/this.pdf`
Loading pages (1/6)
QPainter::begin(): Returned false ] 10%
Error: Unable to write to destination
Error: Failed loading page http://tmp/invoices/test.html (sometimes it will work just to ignore this error with --load-error-handling ignore) => ""
Notice that last bit. I'm pointing to local files, but it's looking for them via http. OK, I think, maybe I need to be explicit and feed it the file:// protocol so it doesn't look for http. So I try this:
> system("wkhtmltopdf --footer-html file://Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/footer.html --header-html file://Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/header.html -s Letter -L 0in -R 0in -T 0.5in -B 1in file://Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/test.html file://Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/this.pdf")
Loading pages (1/6)
Error: Failed loading page file://library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/test.html (sometimes it will work just to ignore this error with --load-error-handling ignore)
=> false
Notice that this one fails with a lowercase "l" on Library. What the heck? (And no, it doesn't get any better with the recommendation to ignore the error with that switch.)
Any ideas? Is there a Rails or Ruby setting that would cause system commands to get rewritten? Is there an option I can add to wkhtmltopdf to make sure it loads from local file? I'm quite baffled. Thanks!
I have had success when using the absolute file path (notice the extra slash after the file://)
wkhtmltopdf --footer-html file:///Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/footer.html --header-html file:///Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/header.html -s Letter -L 0in -R 0in -T 0.5in -B 1in file:///Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/test.html file:///Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/this.pdf
This is the same on Windows.
Unix path
file:///absolute/path/to/file
Windows path
file:///C:/absolute/path/to/file
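A minimal sketch of building such an absolute file:// URL from the shell (the invoice paths are the ones from the question, and the current directory is assumed to be the app root):
# "file://" plus an absolute path yields the three slashes, since $PWD starts with "/"
wkhtmltopdf -s Letter \
--header-html "file://$PWD/tmp/invoices/header.html" \
--footer-html "file://$PWD/tmp/invoices/footer.html" \
"file://$PWD/tmp/invoices/test.html" "$PWD/tmp/invoices/this.pdf"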
In the latest wicked_pdf 0.11 I found a bug.
Example:
C:\Ruby193\lib\ruby\gems\1.9.1\gems\wicked_pdf-0.11.0\lib\wicked_pdf.rb
On line 198, I changed:
options[hf][:html][:url] = "file://#{tf.path}"
to:
options[hf][:html][:url] = "file:///#{tf.path}"
(i.e. changed // to ///). After that change, wicked_pdf worked again.
Take a look at the wicked_pdf gem.
You can add a PDF mime type and then whatever page you want pdf'd, just tack on a .pdf to the URL.
I am using this in prod and it works quite well.
No need to call wkhtmltopdf directly.
