My OS is Arch Linux. I recently installed Erlang (Erlang (BEAM) emulator version 9.0.1).
When I run the "erl" command, I expect the Erlang shell to start and evaluate the expressions I type (simplest example: 2+3. should return 5).
However, after I run "erl", nothing shows up on screen. I can type anything I want, but nothing is executed. Screenshots are attached.
What I expect.jpg
What I got.jpg
I have only just started learning Erlang, and this is confusing. Is this a packaging bug, or normal Erlang behavior?
UPD: I have some custom configuration in my .bashrc file:
# setup color variables
color_is_on=
color_red=
color_green=
color_yellow=
color_blue=
color_white=
color_gray=
color_bg_red=
color_off=
color_user=
if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
    color_is_on=true
    color_black="\[$(/usr/bin/tput setaf 0)\]"
    color_red="\[$(/usr/bin/tput setaf 1)\]"
    color_green="\[$(/usr/bin/tput setaf 2)\]"
    color_yellow="\[$(/usr/bin/tput setaf 3)\]"
    color_blue="\[$(/usr/bin/tput setaf 6)\]"
    color_white="\[$(/usr/bin/tput setaf 7)\]"
    color_gray="\[$(/usr/bin/tput setaf 8)\]"
    color_off="\[$(/usr/bin/tput sgr0)\]"
    color_error="$(/usr/bin/tput setab 1)$(/usr/bin/tput setaf 7)"
    color_error_off="$(/usr/bin/tput sgr0)"
    # set user color
    case `id -u` in
        0) color_user=$color_red ;;
        *) color_user=$color_green ;;
    esac
fi
function prompt_command {
    # get cursor position and add new line if we're not in first column
    exec < /dev/tty
    local OLDSTTY=$(stty -g)
    stty raw -echo min 0
    echo -en "\033[6n" > /dev/tty && read -sdR CURPOS
    stty $OLDSTTY
    [[ ${CURPOS##*;} -gt 1 ]] && echo "${color_error}↵${color_error_off}"
    # build b/w prompt for git and virtual env
    [[ ! -z $GIT_BRANCH ]] && PS1_GIT=" (git: ${GIT_BRANCH})"
    [[ ! -z $VIRTUAL_ENV ]] && PS1_VENV=" (venv: ${VIRTUAL_ENV#$WORKON_HOME})"
    # calculate fillsize
    local fillsize=$(($COLUMNS-$(printf "${USER}#${HOSTNAME}:${PWD}:${PWDNAME}${PS1_GIT}${PS1_VENV} " | wc -c | tr -d " ")))
    local FILL=$color_white
    while [ $fillsize -gt 0 ]; do FILL="${FILL}─"; fillsize=$(($fillsize-1)); done
    FILL="${FILL}${color_off}"
    # set new color prompt
    PS1="${color_user}${USER}${color_off}#${color_yellow}${HOSTNAME}${color_off}:${color_white}${PWD}:${PWDNAME}${color_off}${PS1_GIT}${PS1_VENV} ${FILL}\n➜ "
}
PROMPT_COMMAND=prompt_command
After removing this configuration, erl started to work normally.
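If the colored prompt is worth keeping, one option (an untested sketch based on the configuration above) is to keep prompt_command but drop the cursor-position query, i.e. the exec < /dev/tty, stty and "\033[6n" lines, since that is the part that manipulates the terminal in a way a full-screen program such as erl may not expect:
function prompt_command {
    # no /dev/tty redirection and no raw-mode stty / cursor-position query here
    [[ ! -z $GIT_BRANCH ]] && PS1_GIT=" (git: ${GIT_BRANCH})"
    [[ ! -z $VIRTUAL_ENV ]] && PS1_VENV=" (venv: ${VIRTUAL_ENV#$WORKON_HOME})"
    PS1="${color_user}${USER}${color_off}#${color_yellow}${HOSTNAME}${color_off}:${color_white}${PWD}${color_off}${PS1_GIT}${PS1_VENV}\n➜ "
}
PROMPT_COMMAND=prompt_command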
I have two versions of ruby installed
ubuntu:~/environment $ rvm list
ruby-2.6.6 [ x86_64 ]
=* ruby-3.0.2 [ x86_64 ]
and every time I open a terminal window, ruby-3.0.2 (the default) is set. The problem is that for a couple of my older projects I have to use ruby-2.6.6, so every time I have to switch with
rvm use 2.6.6
Is there a way to automatically select ruby 2.6.6 when I open a terminal window in those specific projects? I have tried to override the default rvm version with the .ruby-version file (as suggested here), but it does not do the trick.
EDIT: The file /home/ubuntu/.rvm/scripts/cd contains the following
#!/usr/bin/env bash
# Source a .rvmrc file in a directory after changing to it, if it exists. To
# disable this feature, set rvm_project_rvmrc=0 in /etc/rvmrc or $HOME/.rvmrc
case "${rvm_project_rvmrc:-1}" in
1|cd)
# cloned from git@github.com:mpapis/bash_zsh_support.git
source "$rvm_scripts_path/extras/bash_zsh_support/chpwd/function.sh"
# not using default loading to support older Zsh
[[ -n "${ZSH_VERSION:-}" ]] &&
__rvm_version_compare "$ZSH_VERSION" -gt 4.3.4 ||
{
function cd() { __zsh_like_cd cd "$@" ; }
function popd() { __zsh_like_cd popd "$@" ; }
function pushd() { __zsh_like_cd pushd "$@" ; }
}
__rvm_after_cd()
{
\typeset rvm_hook
rvm_hook="after_cd"
if [[ -n "${rvm_scripts_path:-}" || -n "${rvm_path:-}" ]]
then source "${rvm_scripts_path:-$rvm_path/scripts}/hook"
fi
}
__rvm_cd_functions_set()
{
__rvm_do_with_env_before
if [[ -n "${rvm_current_rvmrc:-""}" && "$OLDPWD" == "$PWD" ]]
then rvm_current_rvmrc=""
fi
__rvm_project_rvmrc >&2 || true
__rvm_after_cd || true
__rvm_do_with_env_after
return 0
}
[[ " ${chpwd_functions[*]} " == *" __rvm_cd_functions_set "* ]] ||
chpwd_functions=( "${chpwd_functions[@]}" __rvm_cd_functions_set )
# This functionality is opt-in by setting rvm_cd_complete_flag=1 in ~/.rvmrc
# Generic bash cd completion seems to work great for most, so this is only
# for those that have some issues with that.
if (( ${rvm_cd_complete_flag:-0} == 1 ))
then
# If $CDPATH is set, bash should tab-complete based on directories in those paths,
# but with the cd function above, the built-in tab-complete ignores $CDPATH. This
# function returns that functionality.
_rvm_cd_complete ()
{
\typeset directory current matches item index sep
sep="${IFS}"
export IFS
IFS=$'\n'
COMPREPLY=()
current="${COMP_WORDS[COMP_CWORD]}"
if [[ -n "$CDPATH" && ${current:0:1} != "/" ]] ; then
index=0
# The change to IFS above means that the \command \tr below should replace ':'
# with a newline rather than a space. A space would be ignored, breaking
# TAB completion based on CDPATH again
for directory in $(printf "%b" "$CDPATH" | \command \tr -s ':' '\n') ; do
for item in $( compgen -d "$directory/$current" ) ; do
COMPREPLY[index++]=${item#$directory/}
done
done
else
COMPREPLY=( $(compgen -d ${current}) )
fi
IFS="${sep}";
}
complete -o bashdefault -o default -o filenames -o dirnames -o nospace -F _rvm_cd_complete cd
fi
;;
2|prompt)
if
[[ -n "${ZSH_VERSION:-}" ]]
then
precmd_functions+=(__rvm_do_with_env_before __rvm_project_rvmrc __rvm_do_with_env_after)
else
PROMPT_COMMAND="${PROMPT_COMMAND%% }"
PROMPT_COMMAND="${PROMPT_COMMAND%%;}"
PROMPT_COMMAND="${PROMPT_COMMAND:-}${PROMPT_COMMAND:+; }__rvm_do_with_env_before; __rvm_project_rvmrc; __rvm_do_with_env_after"
fi
;;
esac
You are probably using RVM as a shell script, and not as a shell function.
You can check in a typical shell (bash, zsh, ...) by executing: type rvm
If it displays rvm is /home/ying/.rvm/bin/rvm, you are using it as a script (found in $PATH).
If it displays rvm is a function, you are using it as a function (much better).
Check out: https://rvm.io/rvm/basics#post-install-configuration
If you are using it as a script and want to use it as a function, you need to "source" the rvm function. It is located in <rvm main folder>/scripts/rvm; for instance, depending on where it is installed:
source $HOME/.rvm/scripts/rvm
source /usr/local/rvm/scripts/rvm
Typically, at RVM installation time, it adds the following line to the equivalent of .profile (depending on the shell and whether the install is global or per-user):
[[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm" # Load RVM into a shell session *as a function*
RVM automatically changes the version as described here:
https://rvm.io/workflow/projects
For detection of .ruby-version, the following is required:
RVM must be a recent version that supports the feature
RVM must be loaded in the shell (typically by .profile, or equivalent) so that it is executed as a function
the shell must be compatible with this callback feature (bash and zsh are)
Here is what happens:
When you load rvm as a function, it registers callbacks in the shell
when you cd into the project, the RVM callbacks (in the shell) detect the .ruby-version file (or others) and automatically do the equivalent of rvm use.
For instance, I use zsh (on OSX), which has preexec and precmd callbacks,
and it detects the ruby version file and applies it when I cd into the project or a subfolder.
It works with bash too.
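To see the effect end to end, here is a small sketch (the project directory is hypothetical; the Ruby version is the one from the question):
# assuming RVM is already loaded as a function in this shell
cd ~/projects/legacy-app          # hypothetical project directory
echo "ruby-2.6.6" > .ruby-version # RVM also accepts just "2.6.6"
cd .. && cd -                     # leave and re-enter so the cd hook fires
ruby -v                           # should now report ruby 2.6.6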
If you are curious, or want to see why it does not work for you, look at the file <rvm main dir>/scripts/cd:
typically the shell variable chpwd_functions is set to include __rvm_cd_functions_set,
which is the function RVM calls after a cd.
I made a bash script for Nagios to test with Nagiosgraph. However, RRD files are not being created for this script. The default plugins that come with Nagios work fine with Nagiosgraph, and the RRD files for those plugins are present.
Here is the script:
#!/bin/bash
checkgpu=$( nvidia-smi --format=csv --query-gpu=utilization.gpu | awk '/[[:digit:]]+[[:space:]]%/ { tot+=$1;cnt++ } END { print tot/cnt }' | cut -d$
output="Load Average: $checkgpu"
if [ $checkgpu -ge 0 ]
then
    echo "OK- $output"
    exit 0
elif [ $checkgpu -eq 101 ]
then
    echo "WARNING- $output"
    exit 1
elif [ $checkgpu -eq 102 ]
then
    echo "CRITICAL- $output"
    exit 2
else
    echo "UNKNOWN- $output"
    exit 3
fi
What should I do to make this script work with Nagiosgraph/performance data?
Have a look at the development guidelines: https://nagios-plugins.org/doc/guidelines.html#AEN200
The expected format for perfdata is 'label'=value[UOM];[warn];[crit];[min];[max] which can look something like this:
PING ok - Packet loss = 0%, RTA = 0.80 ms | percent_packet_loss=0, rta=0.80
The pipe (|) character tells Nagios that the plugin output has ended and performance data starts.
Note that the above example does not specify UOM (unit of measurement, like percent), nor does it specify any warn/crit thresholds for the data, or min/max values for the graphs. These are all optional.
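Applied to the script from the question, that means appending the performance data after a pipe in the plugin output. A minimal sketch (the gpu_util label and the warn/crit/min/max values are illustrative, not required by Nagios):
checkgpu=$(nvidia-smi --query-gpu=utilization.gpu --format=csv,noheader,nounits \
           | awk '{ tot += $1; cnt++ } END { if (cnt) print int(tot / cnt) }')
output="Load Average: $checkgpu"
# everything after the | is perfdata: 'label'=value[UOM];[warn];[crit];[min];[max]
echo "OK- $output | gpu_util=${checkgpu}%;90;95;0;100"
exit 0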
I get the following collision error when attempting to build an environment that, as far as I can see, shouldn't have a collision (in this case, scala-env depends on ideaLocal, so it shouldn't conflict with it):
...
idea-IU-172.4155.36/bin/libyjpagent-linux.so
idea-IU-172.4155.36/bin/libyjpagent-linux64.so
idea-IU-172.4155.36/help/ideahelp.jar
idea-IU-172.4155.36/lib/libpty/linux/x86/libpty.so
idea-IU-172.4155.36/lib/libpty/linux/x86_64/libpty.so
idea-IU-172.4155.36/bin/format.sh
idea-IU-172.4155.36/bin/fsnotifier
idea-IU-172.4155.36/bin/fsnotifier-arm
idea-IU-172.4155.36/bin/fsnotifier64
idea-IU-172.4155.36/bin/idea.sh
idea-IU-172.4155.36/bin/inspect.sh
idea-IU-172.4155.36/bin/printenv.py
idea-IU-172.4155.36/bin/restart.py
building path(s) ‘/nix/store/29g92lnpi0kywy9x7vcgl9yivwa2blm6-scala-env’
created 696 symlinks in user environment
building path(s) ‘/nix/store/qrnbff8nhpmxlzkmv508aymz5razbhgf-user-environment’
Wide character in die at /nix/store/64jc9gd2rkbgdb4yjx3nrgc91bpjj5ky-buildenv.pl line 79.
collision between ‘/nix/store/75sz9nklqmrmzxvf0faxmf6zamgaznfv-idea-local/bin/idea’ and ‘/nix/store/29g92lnpi0kywy9x7vcgl9yivwa2blm6-scala-env/bin/idea’; use ‘nix-env --set-flag priority NUMBER PKGNAME’ to change the priority of one of the conflicting packages
builder for ‘/nix/store/8hp5kdicxy9i02fa07vx85p1gvh4i1bq-user-environment.drv’ failed with exit code 255
error: build of ‘/nix/store/8hp5kdicxy9i02fa07vx85p1gvh4i1bq-user-environment.drv’ failed
Here is the nix expression (most of which can be ignored, but it isn't too long so I'll paste the whole thing):
with import <nixpkgs> { };
let
  ideaLocal = stdenv.mkDerivation {
    name = "idea-local";
    buildInputs = [ ];
    builder = builtins.toFile "builder.sh" ''
      source $stdenv/setup
      mkdir -p $out/bin
      tar zxvf $src -C $out/
      ln -sf $out/idea-IU* $out/idea
      ln -sf $out/idea/bin/idea.sh $out/bin/idea
    '';
    shellHook = ''
      IDEA_JDK=/usr/lib/jvm/zulu-8-amd64
    '';
    src = fetchurl {
      url = https://download.jetbrains.com/idea/ideaIU-2017.2.4-no-jdk.tar.gz;
      sha256 = "15a4799ffde294d0f2fce0b735bbfe370e3d0327380a0efc45905241729898e3";
    };
    priority = 5;
  };
in
buildEnv {
  name = "scala-env";
  paths = [
    ammonite
    boehmgc
    clang
    dbus # needed non-explicitly by vscode
    emacs
    git
    # idea.idea-ultimate # disabled temporarily
    ideaLocal
    less
    libunwind
    openjdk
    openssh
    re2
    rsync
    sbt
    stdenv
    syncthing # for synchronizing data between containers
    tmux
    unzip
    vscode
    zlib
  ];
  # builder = builtins.toFile "builder.sh" ''
  #   source $stdenv/setup
  #   mkdir -p $out
  #   echo "" > $out/Done
  #   echo "Done setting up Scala environment."
  # '';
  buildInputs = [ makeWrapper ];
  # TODO: better filter, use ammonite script?:
  postBuild = ''
    for f in $(ls -d $out/bin/* | grep "idea"); do
      sed -i '/IDEA_JDK/d' $f
      wrapProgram $f \
        --set IDEA_JDK "/usr/lib/jvm/zulu-8-amd64" \
        --set CLANG_PATH "${clang}/bin/clang" \
        --set CLANCPP_PATH "${clang}/bin/clang++"
    done
  '';
}
Edit:
(DevContainer)which idea
/home/brandon/.nix-profile/bin/idea
(DevContainer)ls -last /home/brandon/.nix-profile/bin/idea
4 lrwxrwxrwx 1 brandon brandon 63 Jan 1 1970 /home/brandon/.nix-profile/bin/idea -> /nix/store/75sz9nklqmrmzxvf0faxmf6zamgaznfv-idea-local/bin/idea
So it looks like ideaLocal is being imported as an environment - what's the right way to just have it installed as a package that is a dependency of scalaEnv?
Apparently the solution was to specify the profile, which I think of as the name of the environment, so that both environments aren't installed simultaneously:
nix-env -if scala-default.nix -p scala-env
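For reference, the collision message itself also suggests an alternative: changing the priority of one of the conflicting packages so nix-env knows which bin/idea should win, along the lines of (the number is arbitrary, and the package name is the one from the error above):
nix-env --set-flag priority 10 idea-local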
I have the following piece of code, which works as expected. It ensures that 2 processes are always spawned, and if any process fails, the script comes to a halt.
I have worked with GNU parallel before on simple one-line scripts and it has worked really well. I'm sure the one below can be made simpler too.
The sleeper function is in reality MUCH more complex than the one shown below.
The objective is for GNU parallel to call the sleeper function in parallel and also do the error handling.
sleeper(){
    stat=$1
    sleep 5
    echo "Status is $1"
    return $1
}
PROCS=2
errfile="errorfile"
rm "$errfile"
while read LINE && [ ! -f "$errfile" ]
do
    while [ ! -f "$errfile" ]
    do
        NUM=$(jobs | wc -l)
        if [ $NUM -lt $PROCS ]; then
            (sleeper $LINE || echo "bad exit status" > "$errfile") &
            break
        else
            sleep 2
        fi
    done
done < sleep_file
wait
Thanks
What you are looking for is --halt (requires version 20150622):
sleeper(){
    stat=$1
    sleep 5
    echo "Status is $1"
    return $1
}
export -f sleeper
parallel -j2 --halt now,fail=1 -v sleeper ::: 0 0 0 1 0 1 0
If you do not want the sleeper to get killed (maybe you want it to finish so it cleans up), then use --halt soon,fail=1 to let the running jobs complete without starting new ones.
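Since the original loop reads its arguments from sleep_file, the same file can be fed to parallel directly instead of listing the values on the command line (with sleeper exported as above):
# :::: reads the arguments from a file, one per line, instead of from the command line
parallel -j2 --halt now,fail=1 -v sleeper :::: sleep_file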
I've got a small script called "onewhich". Its purpose is to behave like which, except that it will only give the FIRST occurrence of any executables specified as options, as found in the order they'd appear in the path.
So for example, if my path is /opt/bin:/usr/bin:/bin, and I have both /opt/bin/runme and /usr/bin/runme, then the command onewhich runme would return /opt/bin/runme.
But if I also have a /usr/bin/doit, then the command onewhich doit runme would return /usr/bin/doit instead.
The idea is to walk through the path, check for each executable specified, and if it exists, show it and exit.
Here's the script so far.
#!/bin/sh
for what in "$@"; do
    for loc in `echo "${PATH}" | awk -vRS=: 1`; do
        if [ -f "${loc}/${what}" ]; then
            echo "${loc}/${what}"
            exit 0
        fi
    done
done
exit 1
The problem is, I want to be better about PATH directories with special characters. Every second shell question here on StackOverflow talks about how bad it is to parse paths with tools like awk and sed. There's even a bash faq entry about it. (Proviso: I'm not using bash for this, but the recommendation is still valid.)
So I tried rewriting the script to separate paths in a pipe, like this:
#!/bin/sh
for what in "$@"; do
    echo "${PATH}" | awk -vRS=: 1 | while read loc ; do
        if [ -f "${loc}/${what}" ]; then
            echo "${loc}/${what}"
            exit 0
        fi
    done
done
exit 1
I'm not sure if this gives me any real advantage (since $loc is still inside quotes), but it also doesn't work because for some reason, the exit 0 seems to be ignored. Or ... it exits something (the sub-shell with the while loop that terminates the pipe, maybe), but the script exits with a value of 1 every time.
What's a better way to step through directories in ${PATH} without the risk that special characters will confuse things?
Alternately, am I reinventing the wheel? Is there maybe a way to do this that's built in to existing shell tools?
This needs to run in both Linux and FreeBSD, which is why I'm writing it in Bourne instead of bash.
Thanks.
This doesn't directly answer your question, but does eliminate the need to parse PATH at all:
onewhich () {
    for what in "$@"; do
        which "$what" 2>/dev/null && break
    done
}
This just calls which on each command on the input list until it finds a match.
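With the example PATH and commands from the question (/opt/bin:/usr/bin:/bin, with /opt/bin/runme, /usr/bin/runme and /usr/bin/doit present), it would behave like this:
$ onewhich runme
/opt/bin/runme
$ onewhich doit runme
/usr/bin/doit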
To parse PATH, you can simply set IFS=':'.
if [ "${IFS:-x}" = "${IFS-x}" ]; then
# Only preserve the value of IFS if it is currently set
OLDIFS=$IFS
fi
IFS=":"
for f in $PATH; do # Do not quote $PATH, to allow word splitting
echo $f
done
if [ "${OLDIFS:-x}" = "${OLDIFS-x}" ]; then
IFS=$OLDIFS
fi
The above will fail if any of the directories in PATH actually contain colons.
Your first method looks to me as if it should work. In practical terms, if it's really the $PATH you'll be searching, it's unlikely you'll have spaces and newlines embedded in directories there. If you do, it's probably time to refactor.
But still, I don't think you're at risk from the possibility of bad names clobbering your loop, since you're wrapping variables in quotes. At worst, I suspect you might miss the odd valid executable, but I can't see how the script would generate errors. (I don't see how the script would miss valid executables, and I haven't tested - I'm just saying I don't see problems at first glance.)
As for your second question, about the loop, I think you've hit the nail on the head. When you run a pipe like this | that | while condition; do things; done, the while loop runs in its own shell at the end of the pipe. Exiting that shell may terminate the actions of the pipe, but that only brings you back to the parent shell, which has its own thread of execution that terminates with exit 1.
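You can see that effect in isolation with a one-liner (illustrative; assumes a shell such as dash or bash where each element of a pipeline runs in a subshell):
sh -c 'echo a | while read x; do exit 0; done; echo "still running"; exit 1'
echo $?   # prints "still running" and then 1, even though exit 0 ran inside the loop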
As for a better way to do this, I would consider which.
#!/bin/sh
for what in "$@"; do
    which "$what"
done | head -1
And if you really want the exit values as well:
#!/bin/sh
for what in "$@"; do
    which "$what" && exit 0
done
exit 1
The second might even use fewer resources, as it doesn't have to open a file handle and pipe through head.
You can also split your path using IFS. For example, if you wanted to wrap your loops the other way around, you could do this:
#!/bin/sh
IFS=":"
for loc in $PATH; do
    for what in "$@"; do
        if [ -x "$loc"/"$what" ]; then
            echo "$loc"/"$what"
            exit 0
        fi
    done
done
exit 1
Note that under normal circumstances, you might want to save the old value of $IFS, but you seem to be doing things in a stand-alone script, so the "new" value gets thrown out when the script exits.
All the above code is untested. YMMV.
Another way to get around the need to parse PATH at all is to run the builtin type command in a new shell with a stripped environment (i.e. there simply are no functions or aliases to look up; cf. env -i sh -c 'type cmd 2>/dev/null').
# using `cmd` instead of $(cmd) for portability
onewhich() {
    ec=0 # exit code
    for cmd in "$@"; do
        command -p env -i PATH="$PATH" sh -c '
            export LC_ALL=C LANG=C
            cmd="$1"
            path="`type "$cmd" 2>/dev/null`"
            if [ X"$path" = "X" ]; then
                printf "%s\n" "error: command \"${cmd}\" not found in PATH" 1>&2
                exit 1
            else
                case "$path" in
                    *\ /*)
                        path="/${path#*/}"
                        printf "%s\n" "$path";;
                    *)
                        printf "%s\n" "error: no disk file: $path" 1>&2
                        exit 1;;
                esac
                exit 0
            fi
        ' _ "$cmd"
        [ $? != 0 ] && ec=1
    done
    [ $ec != 0 ] && return 1
}
onewhich awk ls sed
onewhich builtin
onewhich if
Since which on success returns two full command paths if two commands are specified as arguments, exit 0 in the first onewhich script above aborts the program prematurely. In addition, if two commands are specified as arguments to which, the exit code of which is set to 1 even if only one command lookup failed (cf. which awk sedxyz ls; echo $?). To mimic this behaviour of the which command it is necessary to toggle on/off two variables (cnt and nomatches below).
onewhich() (
    IFS=":"
    nomatches=0
    for cmd in "$@"; do
        cnt=0
        for loc in $PATH ; do
            if [ $cnt = 0 ] && [ -x "$loc"/"$cmd" ]; then
                echo "$loc"/"$cmd"
                cnt=1
            fi
        done
        [ $cnt = 0 ] && nomatches=1
    done
    [ $nomatches = 1 ] && exit 1 || exit 0 # exit 1: at least one cmd was not in PATH
)
onewhich awk ls sed
onewhich awk lsxyz sed
onewhich builtin
onewhich if