How to see the PATH inside a shell without opening a shell - nix

Using the --command flag looked like a solution, but it doesn't work.
Inside the following shell:
nix shell github:nixos/nixpkgs/nixpkgs-unstable#hello
the PATH contains a directory with the hello executable.
I've tried this:
nix shell github:nixos/nixpkgs/nixpkgs-unstable#hello --command echo $PATH
I can't see the hello executable's directory in the printed PATH.
My eyes are not the problem.
diff <( echo $PATH ) <( nix shell github:nixos/nixpkgs/nixpkgs-unstable#hello --command echo $PATH)
I see no difference, which means that the printed PATH does not contain hello.
Why?

The printed path does not contain hello because if your starting PATH was /nix/var/nix/profiles/default/bin:/run/current-system/sw/bin, then you just ran:
nix shell 'github:nixos/nixpkgs/nixpkgs-unstable#hello' --command \
echo /nix/var/nix/profiles/default/bin:/run/current-system/sw/bin
That is to say, you passed your original path as an argument to the nix shell command, instead of passing it a reference to a variable for it to expand later.
The easiest way to accomplish what you're looking for is:
nix shell 'github:nixos/nixpkgs/nixpkgs-unstable#hello' --command \
sh -c 'echo "$PATH"'
The single quotes prevent your shell from expanding $PATH before a copy of sh invoked by nix is started.
Of course, if you really don't want to start any kind of child shell, then you can run a non-shell tool to print environment variables:
nix shell 'github:nixos/nixpkgs/nixpkgs-unstable#hello' --command \
env | grep '^PATH='
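With either variant, the diff comparison from the question now shows the difference; for example (a sketch reusing the asker's commands):
diff <( echo "$PATH" ) \
     <( nix shell 'github:nixos/nixpkgs/nixpkgs-unstable#hello' --command sh -c 'echo "$PATH"' )
The second PATH should include an extra /nix/store/...-hello-.../bin entry, which is where the hello executable lives.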

Related

Path is different depending on how you connect to container

I have an Alpine docker container and, depending on how I connect using ssh, the PATH is different. If I connect and run a login shell:
ssh root@localhost sh -lc env | grep PATH
this prints:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
However, if I don't use a login shell:
ssh root@localhost sh -c env | grep PATH
this prints:
PATH=/bin:/usr/bin:/sbin:/usr/sbin
Why is this happening? What do I need to do so that the second command produces the same output as the first command?
With sh -l you start a login shell:
When invoked as an interactive login shell, or a non-interactive shell with the --login option, it first attempts to read and execute commands from /etc/profile and ~/.profile, in that order. The --noprofile option may be used to inhibit this behavior.
...
A non-interactive shell invoked with the name sh does not attempt to read any other startup files.
From https://linux.die.net/man/1/sh
That is, you can probably edit the profile files to make the login shell behave like the --noprofile case, but going the other way around (making the non-login shell pick up the profile's PATH) is harder.
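As a quick sanity check that the extra directories really come from the profile files, you can look for the PATH assignment there (a sketch, assuming the container's stock /etc/profile sets it):
ssh root@localhost 'grep -n PATH /etc/profile'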
I'll answer my own question. This stack overflow post has the main info needed: Where to set system default environment variables in Alpine linux?
Given that, there are two alternatives:
Declare PATH using the ENV option of the Dockerfile
Or add PermitUserEnvironment yes to the sshd_config file and define PATH in ~/.ssh/environment (sketched below)
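A minimal sketch of the second alternative, assuming the stock OpenSSH file locations and running as root inside the container:
# allow sshd to read per-user environment files
echo 'PermitUserEnvironment yes' >> /etc/ssh/sshd_config
# give non-login sessions the same PATH the login shell gets
echo 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin' > ~/.ssh/environment
# then restart sshd so the new setting takes effect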

Access files written in docker volumes from the host

I have a docker container writing logfiles to a named volume.
From the host I want to analyze the logfiles and search for given log messages. But when I access the folder which 'docker inspect VOLUMENAME' gives, I get strange behavior which I do not understand.
e.g. the following command gives empty lines as output:
user#docker-host-01:~/docker-server-env/otaya-designdb$ sudo bash -c "for logfile in /var/lib/docker/volumes/design-db-logs/_data/*/*; do echo ${logfile}; done"
user#docker-host-01:~/docker-server-env/otaya-designdb$
What could be the reason?
Your local shell is expanding the variable reference inside the double quotes before the loop happens. Change the double quotes to single quotes.
That is, when you run
sudo bash -c "for ... ; do echo ${logfile}; done"
first your local shell replaces the variable reference with whatever your local environment has set for $logfile, probably nothing:
sudo bash -c 'for ...; do echo ; done'
and then it runs that command. If you change this to single quotes initially
sudo bash -c 'for ... ; do echo ${logfile}; done'
it will avoid this expansion.
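Applied to the original command, that gives (same paths as in the question):
sudo bash -c 'for logfile in /var/lib/docker/volumes/design-db-logs/_data/*/*; do echo ${logfile}; done'
Now ${logfile} is expanded by the inner bash on each iteration of the loop, so the file names are printed.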
You can see this just by putting the word echo at the front of the command: the shell will do its expansion, and then echo will print out the command that would have run.
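For example (a sketch, assuming $logfile is unset in your local shell):
echo sudo bash -c "for logfile in /var/lib/docker/volumes/design-db-logs/_data/*/*; do echo ${logfile}; done"
# prints something like:
#   sudo bash -c for logfile in /var/lib/docker/volumes/design-db-logs/_data/*/*; do echo ; done
which makes it obvious that the inner echo has already lost its argument before bash -c ever runs.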

How can I start a bash login shell in jenkins pipeline (formerly known as workflow)?

I am just starting to convert my Jenkins jobs into the new Jenkins Pipeline (workflow) tool, and I'm having trouble getting the sh command to use a bash login shell.
I've tried
sh '''
#!/bin/bash -l
echo $0
'''
but the echo $0 command is always executed in the default shell, rather than a bash login shell.
@izzekil is right! Thank you so much!
To elaborate a little on what is going on: I used sh with ''', which indicates a multi-line script. However, that puts the shebang one line down in the shell script that gets dumped onto the Jenkins node, rather than on the first line, so it is ignored. I was able to fix this with:
sh '''#!/bin/bash -l
echo $0
# more stuff I needed to do,
# like use rvm, which doesn't work with shell, it needs bash.
'''
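The same mechanism can be reproduced outside Jenkins; a rough sketch with hypothetical temp files (Jenkins's actual invocation differs, but the first-line rule for shebangs is the same):
# shebang on line 2 is just a comment, so whichever shell runs the file decides the mode
printf '\n#!/bin/bash -l\nshopt -q login_shell && echo login || echo non-login\n' > /tmp/blank-first.sh
# shebang on line 1 makes the kernel start /bin/bash -l, i.e. a login shell
printf '#!/bin/bash -l\nshopt -q login_shell && echo login || echo non-login\n' > /tmp/shebang-first.sh
chmod +x /tmp/blank-first.sh /tmp/shebang-first.sh
bash /tmp/blank-first.sh    # prints "non-login"
/tmp/shebang-first.sh       # prints "login" (assumes /bin/bash exists)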

How to run a command on the startup of an xterm?

How can I run a command on xterm startup, i.e. so that when an xterm terminal is launched the command has already been executed?
I have edited the .bashrc file to add this line:
xterm "ls"
But this does not work.
Please suggest what I should do to achieve this.
Thanks.
According to the bash manual, ~/.bashrc is used for interactive shells. xterm runs a shell, so perhaps your "does not work" is actually a chain of xterms: each new xterm's shell reads ~/.bashrc and launches yet another xterm.
The xterm program sets these environment variables, which are useful for scripting: XTERM_VERSION and XTERM_SHELL. In your ~/.bashrc file, you could use the former to run the xterm with ls once only:
if [[ -z "$XTERM_VERSION" ]]
then
xterm -hold -e ls &
fi
which seems to be what you are asking for:
it would run an xterm if not run from an existing xterm
it prevents the xterm from closing when the ls is done.
A more useful-seeming way of showing an ls on shell startup would be to run ls in each shell as it is started (for that case, you do not need to run a separate xterm). Again, you can use environment variables to do this once (in case you run bash to make a subshell):
if [[ -z "$XTERM_ONCE" ]]
then
export XTERM_ONCE=$(date)
ls
fi
I use this:
-e /bin/bash -login
-e command [arguments]
Run the command with its command-line arguments in the rxvt window;
also sets the window title and icon name to be the basename of the
program being executed if neither -title (-T) nor -n are given on the
command line. If this option is used, it must be the last on the
command-line. If there is no -e option then the default is to run the
program specified by the SHELL environment variable or, failing that,
sh(1).
http://linux.die.net/man/1/rxvt
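Put together on the command line, the option from this answer looks like this (a sketch; xterm takes -e the same way as rxvt):
xterm -e /bin/bash -login
The resulting login shell reads the profile files, which is a natural place to put a startup command such as the ls from the question.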

Why is capistrano interpreting a flag passed with a command to `run` as input?

I'm trying to do this:
run "echo -n 'foo' > bar.txt"
and the contents of bar.txt ends up being:
-n foo \n
(With \n representing an actual newline)
I use run for other commands like rm -rf and, to my knowledge, it works fine.
I just found this in man echo:
Some shells may provide a builtin echo command which is similar or identical to this utility. Most notably, the builtin echo in sh(1) does not accept the -n option. Consult the builtin(1) manual page.
My version of bash has an echo builtin but seems to be respecting the -n flag. It looks like the shell on your deployment machine doesn't, in which case using the full path to the echo binary might do what you want here:
run "/bin/echo -n 'foo' > bar.txt"
It appears as though the -n flag isn't being interpreted as a flag by the shell. If, from the command line, one executes echo -Y hi, the output will be -Y hi.
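A quick way to check which behavior you are getting on the deployment host (a sketch; the exact output wording varies by shell):
type echo            # in bash: "echo is a shell builtin"
echo -n foo          # the current shell's builtin; may or may not honor -n
/bin/echo -n foo     # the external binary, which does accept -n
sh -c 'echo -n foo'  # an sh whose builtin ignores -n prints "-n foo" plus a newline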

Resources