GNU Screen: Environment variables

[Updated]
This question is related to GNU Screen: Programmers quotes in Readbuf and GNU Screen: files to numbered buffers?. Since those are unsolved, this question targets the more general concept of environment variables. My belief is that they are the key to making Screen more efficient.
1. How can I use Bash's variables in Screen like:
$ export path=`pwd`
$ ^a :readbuf `echo $path`/debugging_code.php
2. How can I reuse Screen's buffers like:
$ ^a :readreg a `echo $path`
$ ^a :readbuf $a/debugging_code.php
$ ^a ]
3. How can I use Screen's buffers like environment variables?

The following command does not create a new screen session, but it does create a screen-internal variable. Running it on the command line allows you to use shell expansion:
$ screen -X setenv a "$PWD/debugging_code.php"
Then use the new variable:
C-a :readbuf $a
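For example, you can wrap this in a small shell function so the variable always tracks your current directory (the function name and the variable a are just illustrative):

# Point screen's internal variable "a" at a file in the current
# directory; afterwards, inside screen: C-a :readbuf $a
setbufpath() {
    screen -X setenv a "$PWD/$1"
}
setbufpath debugging_code.php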

I have made a patch to screen 4.0.3 that supports the following syntax:
^A :readbuf !shell-command
This allows you to execute any arbitrary shell command and pipe the output into the screen buffer. Note that this is implemented by executing a subshell using popen and copying the standard output to the file currently specified in the bufferfile setting (and then reading that file), so be careful you don't overwrite something you don't intend to. Also, this patch is probably terribly insecure, so use it at your own risk.
An example might be:
^A :readbuf !cat $HOME/projects/foobar/file.txt
Any shell command is executed literally as typed.
See gnu-screen-readbuf-exec on Github for the Git repository containing the patch.
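Without the patch, you can approximate :readbuf !command from the shell by writing the command's output into screen's exchange file yourself and then loading it. A minimal sketch, assuming the default bufferfile (/tmp/screen-exchange):

# Capture the command's output into screen's exchange file...
cat "$HOME/projects/foobar/file.txt" > /tmp/screen-exchange
# ...then load it into the paste buffer; paste inside screen with C-a ]
screen -X readbuf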

Related

Can't use commands from ipython or julia repls with zsh

When I try to run a shell command in ipython or the julia repl it just says
shell> ls
zsh:1: command not found: ls
Not sure if it matters, but I have my path set in zshenv instead of zshrc so that emacs shell works.
Any ideas?
Edit:
I'm on macOS 10.14.6
For Julia: the shell> REPL prompt does in fact use a shell to execute its commands (on non-Windows systems). It effectively does something like run(`$shell -c ls`), and for most shells (including zsh) this means "non-interactive" mode, which limits the number of init files that get loaded. You want to make sure your shell works in this mode; I'd guess that if you type zsh -c ls at your terminal it'll be similarly broken.
Alternatively, you can customize which shell Julia uses through an environment variable. Setting JULIA_SHELL=/bin/sh is probably a safe bet — Julia uses that environment variable if it is set, otherwise it uses SHELL, and finally it falls back to /bin/sh if neither is set.
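A quick way to test and apply the workaround from a terminal (assuming /bin/sh behaves sanely on your system):

# Reproduce what Julia does: run ls through a non-interactive zsh.
# If this fails, the problem is your zsh setup, not Julia.
zsh -c ls

# Workaround: tell Julia to use a shell that works non-interactively.
export JULIA_SHELL=/bin/sh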
I'm not as familiar with ipython, but I'd wager it's doing something similar.

TCL exec echo does not work with environment variables

Let us say that I have environment variable PO, with value 1. If I use the Linux echo command I get:
>echo $PO
1
However, if I use TCL and exec, I do not get interpolation:
>exec echo "\$PO"
$PO
Now, if I do something more elaborate, using regsub to replace every ${varname} with [ lindex [ array get env varname ] 1 ] and then applying subst, it works:
>subst [ regsub -all {\$\{(\S+?)\}} "\${PO}/1" "\[ lindex \[ array get env \\1 \] 1 \]" ]
1/1
I have some corner cases, sure. But why is the exec not giving back what the shell would do?
why is the exec not giving back what the shell would do?
Because exec is not a shell.
When you do echo $PO from a shell, echo is not responsible for resolving the value. It is the shell that converts $PO to the value 1 before calling echo; echo never sees $PO when called from the shell.
If you are trying to emulate what the shell does, then you need to do the same work as the shell (or, invoke an actual shell to do the work for you).
Tcl is a lot more careful about where it does interpolation than Unix shells normally are. It keeps environment variables out of the way so that you don't trip over them by accident, and does far less processing when it invokes a subprocess. This is totally by design!
As much as possible (with a few exceptions) Tcl passes the arguments to exec through to the subprocesses it creates. It also has standard mechanisms for quoting strings so that you can control exactly what substitutions happen before the arguments are actually passed to exec. This means that when you do:
exec echo "\$PO"
Tcl is going to do its normal substitution rules and get these exact arguments to the command dispatch: exec, echo, and $PO. This then calls into the exec command, which launches the echo program with one argument, $PO, and echo prints exactly that. (Shells usually substitute the value first.) If you'd instead done:
exec echo {$PO}
you would have got the same effect. Or even if you'd done:
exec {*}{echo $PO}
You still end up feeding the exact same characters into exec as its arguments. If you want to run the shell on it, you should explicitly ask for it:
exec /bin/sh -c {echo $PO}
The bit in the braces there is a full (small) shell script, and will be evaluated as such. And you could do this even:
exec /bin/sh -c {exec echo '$PO'}
It's a bit of a useless thing to do but it works.
You can also do whatever substitutions you want from your own code. My current favourite from Tcl 8.7 (in development) is this:
exec echo [regsub -all -command {\$(\w+)} "\$PO" {apply {{- name} {
    global env
    return $env($name)
}}}]
OK, total overkill for this but since you can use any old complex RE and script to do the substitutions, it's a major power tool. (You can do similar things with string map, regsub and subst in older Tcl, but that's quite a bit harder to do.) The sky and your imagination are the only limits.

Why is my terminal output not identical when running a yarn script vs its bash equivalent?

**NOTE: I've added updates in order, just keep reading, thanks. :)**
I've been very curious about this -- please see this screenshot of me running:
ls -lah build, and
yarn run assets, which runs ls -lah build.
Let me start by saying that this is a WIP build in webpack, so no need to tell me that a 31M bundle is less than optimal. :)
But why do I get the colors and the more detailed font with the native command and not when yarn executes the command? It may be relevant: this screenshot is:
- Windows 10
- Webstorm terminal
- logged in to a docker container running Ubuntu 14.04
Thanks! :)
**UPDATE: --color=always restores color**
As Charles Duffy suggested, adding --color=always in the yarn script preserved the formatting.
If anyone has some specialized knowledge to share about what's going on here, I'm in the market to hear it! Thanks!
Short(ish) answer: What's actually going on?
The below answer assumes the GNU implementation of ls.
There are a few possibilities at play:
Your interactive terminal's options may be modified by a shell alias. Output from type ls will indicate whether this is true.
You may have ls --color=auto enabled, either via an alias or via an equivalent environment variable; regardless, this checks whether it's writing directly to a TTY, and only enables color if so.
If output is not direct to a TTY (for instance, if output is being captured by yarn before it's printed), ls --color=auto will not colorize.
To fix this, you can explicitly pass ls --color=always, or its equivalent, simply ls --color. This covers both cases: If you had an alias in use passing --color=auto on your behalf, passing it explicitly means you no longer need the alias. By contrast, if yarn is capturing content rather than passing it straight to the TTY, then --color=always tells ls to ignore isatty() returning false and colorize anyhow.
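You can reproduce this from any shell; the pipe here stands in for yarn capturing the output:

ls --color=auto          # stdout is a TTY: output is colorized
ls --color=auto | cat    # stdout is a pipe: color is suppressed
ls --color=always | cat  # color codes are emitted regardless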
Background on what the above means:
A "TTY" is, effectively, a terminal. It provides bells and whistles (literally, for the bells) specialized for providing a device that a user is actually typing at. This means it has control sequences for inspecting and modifying cursor location, and -- pertinently for our purposes -- for changing the color with which output is rendered.
A "FIFO" is a pipe -- it moves characters from point A to point B, first-in, first-out. In the case of prog-one | prog-two, the thing that connects those two is a FIFO. It just moves characters, and has no concept of cursor location or colorization or anything else.
If ls tried to put color sequences in its output when that output is intended for any destination other than a terminal, those sequences wouldn't make any sense -- indeed, the very format in which colorization markers need to be printed is determined by the TERM variable specifying the currently active terminal type.
If you run ls --color, then, you're promising ls that its output really will be rendered by a terminal, or (at least) otherwise something that understands the color sequences appropriate to the currently configured TERM.
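If you want to see the isatty() check in action, the shell exposes it as the -t test:

# [ -t 1 ] asks whether file descriptor 1 (stdout) is a terminal,
# the same check ls --color=auto performs via isatty().
sh -c '[ -t 1 ] && echo "stdout is a TTY" || echo "stdout is not a TTY"'
sh -c '[ -t 1 ] && echo "stdout is a TTY" || echo "stdout is not a TTY"' | cat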

Inheriting environment variables with GNU Parallel

I would like to inherit environment variables in GNU Parallel. I have several 'scripts' (really just lists of commands, designed for use with GNU Parallel) with hundreds of lines each that all call different external programs. However, these external programs (out of my control) require that several environment variables be set before they will even run.
Setting/exporting them locally doesn't seem to help, and I don't see any way to add this information to a profile.
The documentation doesn't seem to cover this, and similar SO pages suggest wrapping the command in a script. However, that seems like an inelegant solution. Is there a way to export the current environment, or perhaps specify the required variables in a script?
Thanks!
This works for me:
FOO="My brother's 12\" records"
export FOO
parallel echo 'FOO is "$FOO" Process id $$ Argument' ::: 1 2 3
To make it work for remote connections (through ssh) you need to quote the variable for shell expansion. parallel --shellquote can help you do that:
parallel -S server export FOO=$(parallel --shellquote ::: "$FOO")\;echo 'FOO is "$FOO" Process id $$ Argument' ::: 1 2 3
If that does not solve your issue, please consider showing an example that does not work.
-- Edit --
Look at --env, introduced in version 20121022.
-- Edit --
Look at env_parallel, introduced in version 20160322.
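For reference, minimal sketches of both approaches (the server name is illustrative; check that your installed version supports these):

# --env copies the named variable into the jobs' environment,
# including for remote jobs:
export FOO="My brother's 12\" records"
parallel --env FOO -S server echo 'FOO is "$FOO"' ::: 1 2 3

# env_parallel transfers the full environment (variables, aliases,
# functions); it must be sourced into the current shell first (bash shown):
. "$(which env_parallel.bash)"
env_parallel -S server echo 'FOO is "$FOO"' ::: 1 2 3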

How to get ccache to not pass the full path to the compiler to distcc

(This is different from the question ccache and absolute path, as I only want the command path not to be expanded on the ccache host machine.)
When using ccache and distcc together, ccache expands the compiler to an absolute path, and then distcc cannot use the PATH on the remote machine to choose which compiler to use.
e.g. I call CCACHE_PREFIX=distcc ccache g++ foo.cc and ccache expands this into a local preprocessing step and cache check, and then a call to distcc as distcc /usr/bin/g++, which is the wrong version (the right g++ appears earlier in the remote PATH than /usr/bin, but this never gives distcc the chance to search the PATH at all).
I have various different machines being used as distcc hosts, and they have the same version of gcc/g++ installed in different locations (yes, this problem goes away if I put them all somewhere like /usr/local, but I can't do that at the moment).
Is there a setting to get ccache to pass just g++ to distcc rather than expanding the path to the absolute path of the local compiler? I'm not completely against patching ccache if there is no setting yet, but that's a last resort :)
Turns out there's a simple way to do this: just use a wrapper for CCACHE_PREFIX instead of distcc directly, with something like this:
File : distcc-wrap.sh
#!/bin/sh
# Strip the directory from the compiler path so the remote end
# resolves the compiler via its own PATH.
compiler=$(basename "$1")
shift
exec distcc "$compiler" "$@"
export CCACHE_PREFIX=distcc-wrap.sh, and then the remote compiler can live in a different place; distcc will search the PATH for it.
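For example (the install location is illustrative):

# Put the wrapper on PATH and make it executable:
chmod +x ~/bin/distcc-wrap.sh
export CCACHE_PREFIX=distcc-wrap.sh

# ccache now invokes: distcc-wrap.sh /usr/bin/g++ ...
# which in turn runs: distcc g++ ...
ccache g++ -c foo.cc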
(Thanks to Joel on the ccache mailing list for this answer; see http://www.mail-archive.com/ccache@lists.samba.org/msg00670.html for the original message)
I tried David's solution but ran into "distcc seems to have invoked itself recursively!" in distcc plain mode. This happens because when the host distcc does preprocessing (cpp), it invokes the host ccache, but distcc-wrap intercepts that and spawns a nested distcc, forming a recursive chain:
g++ -> ccache -> distcc -> distcc-wrap -> preprocess using g++ -> ccache -> distcc -> ... and so on.
My solution is to use DISTCC_CMDLIST, from man distccd:
DISTCC_CMDLIST
If the environment variable DISTCC_CMDLIST is set, load a list of supported commands from the file named by DISTCC_CMDLIST, and refuse to serve any command whose last DISTCC_CMDLIST_NUMWORDS words do not match those of a command in that list. See the comments in src/serve.c.
Assuming that on the remote machine you want to use /usr/local/ccache/g++ (which is a symlink to /usr/bin/ccache) to do the compilation, instead of the absolute path expanded by the host machine, you can do the following:
create a file /path/to/.distcc/DISTCC_CMDLIST with this line:
/usr/local/ccache/g++
export DISTCC_CMDLIST=/path/to/.distcc/DISTCC_CMDLIST
restart the distccd daemon:
distccd --no-detach -a <host IPs> --daemon
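Putting those steps together (paths and the accepted host range are illustrative):

# On the remote machine:
mkdir -p /path/to/.distcc
echo /usr/local/ccache/g++ > /path/to/.distcc/DISTCC_CMDLIST
export DISTCC_CMDLIST=/path/to/.distcc/DISTCC_CMDLIST
distccd --no-detach -a 192.168.0.0/24 --daemon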
What will happen is that whenever the distcc remote server receives an expanded command from the host like /usr/bin/g++ main.cc -c, it maps the real compiler from /usr/bin/g++ to /usr/local/ccache/g++. The mapping is done by:
retrieve the basename from the compiler path in the received command (g++ in this case)
look up the DISTCC_CMDLIST file to see whether any line has a basename equal to g++; in this case that is /usr/local/ccache/g++
rewrite the command to /usr/local/ccache/g++ main.cc -c, which invokes ccache on the remote server
The above is just an example; you can extend the compiler matching by changing the value of DISTCC_CMDLIST_NUMWORDS from 1 to other values to do more tricks.
