Environment variable in Git Bash not set

I'm trying to set an environment variable within my MinGW Git Bash (Windows 7 x64) via a small bash script, but it doesn't get set; it only works if I run the export manually.
Contents of dev.bsh:
schwat@AACarrier MINGW64 ~/Documents/test
$ cat dev.bsh
#!/usr/bin/env sh
export KUBECONFIG="/c/Users/schwat/Documents/test/.kube/dev.kubecfg"
kubectl config set-context dev --cluster=kubernetes --namespace=dev --user=admin
kubectl config use-context dev
echo "Connected to ENV:DEV"
Executing dev.bsh and echoing $KUBECONFIG:
schwat@AACarrier MINGW64 ~/Documents/test
$ ./dev.bsh
Context "dev" modified.
Switched to context "dev".
Connected to ENV:DEV
schwat@AACarrier MINGW64 ~/Documents/test
$ echo $KUBECONFIG
Exporting KUBECONFIG manually and echoing $KUBECONFIG:
schwat@AACarrier MINGW64 ~/Documents/test
$ export KUBECONFIG="/c/Users/schwat/Documents/test/.kube/dev.kubecfg"
schwat@AACarrier MINGW64 ~/Documents/test
$ echo $KUBECONFIG
/c/Users/schwat/Documents/test/.kube/dev.kubecfg
Any idea what's wrong here? (not a duplicate of: Set an environment variable in git bash)

I see two main points in your script:
First, you are using sh instead of bash in your shebang. Replace
#!/usr/bin/env sh
with
#!/usr/bin/env bash
The second point relates to how export behaves in a script.
When you execute a script, a new process is created, so the variables you create and export are available to that new process and all of its sub-processes, but not to the parent process, in this case the shell from which you called the script.
So your variable is probably being created, but when the script finishes it is destroyed and you can no longer see it.
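If you want KUBECONFIG to be set in your interactive Git Bash session, a minimal sketch: source the script so it runs in the current shell instead of a child process:
. ./dev.bsh        # or: source ./dev.bsh
echo $KUBECONFIG   # now prints /c/Users/schwat/Documents/test/.kube/dev.kubecfg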
Hope it helps!

Related

How to see the PATH inside a shell without opening a shell

Using the --command flag looked like a solution, but it doesn't work.
Inside the following shell:
nix shell github:nixos/nixpkgs/nixpkgs-unstable#hello
the PATH contains a directory with the hello executable.
I've tried this:
nix shell github:nixos/nixpkgs/nixpkgs-unstable#hello --command echo $PATH
I can't see the hello executable in the output.
My eyes are not the problem.
diff <( echo $PATH ) <( nix shell github:nixos/nixpkgs/nixpkgs-unstable#hello --command echo $PATH)
diff sees no difference, which means the printed PATH does not contain hello.
Why?
The printed path does not contain hello because if your starting PATH was /nix/var/nix/profiles/default/bin:/run/current-system/sw/bin, then you just ran:
nix shell 'github:nixos/nixpkgs/nixpkgs-unstable#hello' --command \
echo /nix/var/nix/profiles/default/bin:/run/current-system/sw/bin
That is to say, you passed your original path as an argument to the nix shell command, instead of passing it a reference to a variable for it to expand later.
The easiest way to accomplish what you're looking for is:
nix shell 'github:nixos/nixpkgs/nixpkgs-unstable#hello' --command \
sh -c 'echo "$PATH"'
The single quotes prevent your shell from expanding $PATH before a copy of sh invoked by nix is started.
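You can see the expansion order with a small experiment (a sketch; outside of nix both lines print the same directories, but inside nix shell they differ):
sh -c 'echo "$PATH"'   # single quotes: the child sh expands $PATH itself
sh -c "echo $PATH"     # no single quotes: your shell expands $PATH before sh even starts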
Of course, if you really don't want to start any kind of child shell, then you can run a non-shell tool to print environment variables:
nix shell 'github:nixos/nixpkgs/nixpkgs-unstable#hello' --command \
env | grep '^PATH='

Set environment variable from sh script in systemd service file

I'm trying to use a ready-made bash script that sets environment variables.
This is the service file I'm trying to use:
[Unit]
Description=myserver service
After=multi-user.target
[Service]
Type=simple
User=ec2-user
Group=ec2-user
WorkingDirectory=/home/ec2-user/myserver/
ExecStart=/bin/sh -c '/home/ec2-user/myserver/config/myserverVars.sh ;/home/ec2-user/venv/bin/python /home/ec2-user/myserver/myserver.py 2>&1 >> /home/ec2-user/myserver/logs/systemd_myserver.log'
StandardOutput=append:/home/ec2-user/myserver/logs/systemd_stdout.log
StandardError=append:/home/ec2-user/myserver/logs/systemd_stderr.log
[Install]
WantedBy=multi-user.target
the myserverVars.sh:
#!/bin/bash
export APP1=foo@gmail.com
export APP2_BIND_PASS=xxxxxx
export APP3=xxxxxx
The variables in /home/ec2-user/myserver/config/myserverVars.sh
are never set, and the server starts without them, which is wrong.
I'm trying to avoid using the Environment key or EnvironmentFile.
When you run /home/ec2-user/myserver/config/myserverVars.sh, it runs in a new process which exits when it finishes, so all the changes to the environment are lost. You need to ask the current shell to execute the script without starting a new process. This is done with the source command, which is also available as the "dot" command (.). So use
/bin/sh -c 'source /home/ec2-user/myserver/config/myserverVars.sh; ...'
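Putting it together, the ExecStart line might look like this (a sketch reusing the paths from the question; the portable dot form is used in case /bin/sh is not bash, and exec replaces the wrapper shell with the Python process):
ExecStart=/bin/sh -c '. /home/ec2-user/myserver/config/myserverVars.sh; exec /home/ec2-user/venv/bin/python /home/ec2-user/myserver/myserver.py'
The exported variables survive because the dot command runs the script inside the same shell process that then execs the server.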

How to get /etc/profile to run automatically in Alpine / Docker

How can I get /etc/profile to run automatically when starting an Alpine Docker container interactively? I have added some aliases to an aliases.sh file and placed it in /etc/profile.d, but when I start the container using docker run -it [my_container] sh, my aliases aren't active. I have to manually type . /etc/profile from the command line each time.
Is there some other configuration necessary to get /etc/profile to run at login? I've also had problems with using a ~/.profile file. Any insight is appreciated!
EDIT:
Based on VonC's answer, I pulled and ran his example ruby container. Here is what I got:
$ docker run --rm --name ruby -it codeclimate/alpine-ruby:b42
/ # more /etc/profile.d/rubygems.sh
export PATH=$PATH:/usr/lib/ruby/gems/2.0.0/bin
/ # env
no_proxy=*.local, 169.254/16
HOSTNAME=6c7e93ebc5a1
SHLVL=1
HOME=/root
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
/ # exit
Although the /etc/profile.d/rubygems.sh file exists, it is not being run when I login and my PATH environment variable is not being updated. Am I using the wrong docker run command? Is something else missing? Has anyone gotten ~/.profile or /etc/profile.d/ files to work with Alpine on Docker? Thanks!
The default shell in Alpine Linux is ash.
Ash will only read the /etc/profile and ~/.profile files if it is started as a login shell (sh -l).
To force ash to source /etc/profile, or any other script you want, upon its invocation as a non-login shell, you need to set an environment variable called ENV before launching ash.
e.g. in your Dockerfile
FROM alpine:3.5
ENV ENV="/root/.ashrc"
RUN echo "echo 'Hello, world!'" > "$ENV"
When you build that you get:
deployer@ubuntu-1604-amd64:~/blah$ docker build --tag test .
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM alpine:3.5
3.5: Pulling from library/alpine
627beaf3eaaf: Pull complete
Digest: sha256:58e1a1bb75db1b5a24a462dd5e2915277ea06438c3f105138f97eb53149673c4
Status: Downloaded newer image for alpine:3.5
---> 4a415e366388
Step 2/3 : ENV ENV "/root/.ashrc"
---> Running in a9b6ff7303c2
---> 8d4af0b7839d
Removing intermediate container a9b6ff7303c2
Step 3/3 : RUN echo "echo 'Hello, world!'" > "$ENV"
---> Running in 57c2fd3353f3
---> 2cee6e034546
Removing intermediate container 57c2fd3353f3
Successfully built 2cee6e034546
Finally, when you run the newly generated container, you get:
deployer@ubuntu-1604-amd64:~/blah$ docker run -ti test /bin/sh
Hello, world!
/ # exit
Notice the Ash shell didn't run as a login shell.
So to answer your query, replace
ENV ENV="/root/.ashrc"
with:
ENV ENV="/etc/profile"
and Alpine Linux's Ash shell will automatically source the /etc/profile script each time the shell is launched.
Gotcha: /etc/profile is normally meant to be sourced only once! So I would advise that you don't source it, and source a /root/.somercfile instead.
Source: https://stackoverflow.com/a/40538356
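A minimal sketch of that advice (the name /root/.somercfile is just a placeholder; the alias stands in for whatever you want sourced on every shell start):
FROM alpine:3.5
ENV ENV="/root/.somercfile"
RUN echo "alias ll='ls -lh'" > /root/.somercfile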
You can still try adding this to your Dockerfile:
RUN echo '\
. /etc/profile ; \
' >> /root/.profile
(assuming the current user is root. If not, replace /root with the full home path)
That being said, those /etc/profile.d/*.sh scripts should run.
See codeclimate/docker-alpine-ruby as an example:
COPY files /
With files/etc including a files/etc/profile.d/rubygems.sh that runs just fine.
In the OP's project Dockerfile, there is a
COPY aliases.sh /etc/profile.d/
But the default shell is not a login shell (sh -l), which means profile files (or those in /etc/profile.d) are not sourced.
Adding sh -l would work:
docker@default:~$ docker run --rm --name ruby -it codeclimate/alpine-ruby:b42 sh -l
87a58e26b744:/# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/ruby/gems/2.0.0/bin
As mentioned by Jinesh before, the default shell in Alpine Linux is ash
localhost:~$ echo $SHELL
/bin/ash
localhost:~$
Therefore the simple solution is to add your aliases in .profile. In this case, I put all my aliases in ~/.ash_aliases:
localhost:~$ cat .profile
# ~/.profile
# Alias
if [ -f ~/.ash_aliases ]; then
. ~/.ash_aliases
fi
localhost:~$
The .ash_aliases file:
localhost:~$ cat .ash_aliases
alias a=alias
alias c=clear
alias f=file
alias g=grep
alias l='ls -lh'
localhost:~$
And it works :)
I use this:
docker exec -it my_container /bin/ash '-l'
The -l flag passed to ash will make it behave as a login shell, thus reading ~/.profile.

How to set environment variables in Codenvy terminal

I'm using Codenvy to install golang and as part of the process I'm setting environment variables. I can set the environment variables just fine during the docker build process, but when I start the resulting Codenvy terminal the environment variables aren't set. How can I have the environment variables that are set in the dockerfile be present in the resulting terminal?
As an example if I take this dockerfile:
FROM codenvy/python34
ENV GOPATH /tmp/application/gopath
ENV PATH $GOPATH:$GOPATH/bin:$PATH
CMD echo $PATH && sleep 1h
...then in the docker build output I see
[STDOUT] /tmp/application/gopath:/tmp/application/gopath/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
...but when I open the terminal and look at the $PATH I see...
user#6ec34a856f91:~$ echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/games
The answer was sent to me from the Codenvy Google Group: you need to add lines to your /home/user/.bashrc file, which gets run when your terminal starts.
RUN echo "export GOPATH=$GOPATH" >> /home/user/.bashrc
RUN echo "export PATH=$GOPATH/bin:$PATH" >> /home/user/.bashrc

Why are the environment variables shown by the shell and Jenkins different?

I found that the environment variables shown by the shell and by Jenkins are different. When I check $PATH with echo, it shows the following:
# cat /etc/passwd | grep jenkins
jenkins:x:998:997:Jenkins Continuous Integration Server:/var/lib/jenkins:/bin/bash
# su jenkins
bash-4.2$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
However, when I execute echo $PATH in Jenkins (Build -> Execute shell), the console log shows the following:
[workspace] $ /bin/sh -xe /tmp/hudson6923847933544830986.sh
+ echo /sbin:/usr/sbin:/bin:/usr/bin
/sbin:/usr/sbin:/bin:/usr/bin
Finished: SUCCESS
Not only $PATH but many other variables are different as well. $PATH is important for executing commands, and I don't understand why they are not the same. Do you have any idea?
Maybe the user in your shell and the user who is building the job are not the same.
Run the whoami command in your shell and in Jenkins (Build -> Execute shell) to compare.
You can also set the PATH variable in the Build -> Execute shell section, or in Jenkins -> Manage Jenkins -> Configure System -> Global properties: check Environment variables, then add Name: PATH, Value: $PATH:/usr/local/sbin.
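For the Execute shell route, a minimal sketch (the appended directory is only an example):
export PATH="$PATH:/usr/local/sbin"
whoami         # compare this with the user in your terminal
echo "$PATH"
Note that variables exported here last only for this build step, since Jenkins starts a fresh shell for each Execute shell block.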
