Alpine not loading /etc/profile [duplicate] - docker

I'm trying to write (what I thought would be) a simple bash script that will:
1. run virtualenv to create a new environment at $1
2. activate the virtual environment
3. do some more stuff (install django, add django-admin.py to the virtualenv's path, etc.)
Step 1 works quite well, but I can't seem to activate the virtualenv. For those not familiar with virtualenv, it creates an activate file that activates the virtual environment. From the CLI, you run it using source
source $env_name/bin/activate
Where $env_name, obviously, is the name of the dir that the virtual env is installed in.
In my script, after creating the virtual environment, I store the path to the activate script like this:
activate="`pwd`/$ENV_NAME/bin/activate"
But when I call source "$activate", I get this:
/home/clawlor/bin/scripts/djangoenv: 20: source: not found
I know that $activate contains the correct path to the activate script, in fact I even test that a file is there before I call source. But source itself can't seem to find it. I've also tried running all of the steps manually in the CLI, where everything works fine.
In my research I found this script, which is similar to what I want but is also doing a lot of other things that I don't need, like storing all of the virtual environments in a ~/.virtualenv directory (or whatever is in $WORKON_HOME). But it seems to me that he is creating the path to activate, and calling source "$activate" in basically the same way I am.
Here is the script in its entirety:
#!/bin/sh
PYTHON_PATH=~/bin/python-2.6.1/bin/python
if [ $# = 1 ]
then
    ENV_NAME="$1"
    virtualenv -p $PYTHON_PATH --no-site-packages $ENV_NAME
    activate="`pwd`/$ENV_NAME/bin/activate"
    if [ ! -f "$activate" ]
    then
        echo "ERROR: activate not found at $activate"
        return 1
    fi
    source "$activate"
else
    echo 'Usage: djangoenv ENV_NAME'
fi
DISCLAIMER: My bash script-fu is pretty weak. I'm fairly comfortable at the CLI, but there may well be some extremely stupid reason this isn't working.

If you're writing a bash script, call it by name:
#!/bin/bash
/bin/sh is not guaranteed to be bash. This caused a ton of broken scripts in Ubuntu some years ago (IIRC).
The source builtin works just fine in bash; but you might as well just use dot like Norman suggested.

In the POSIX standard, which /bin/sh is supposed to respect, the command is . (a single dot), not source. The source command is a csh-ism that has been pulled into bash.
Try
. $env_name/bin/activate
Or if you must have non-POSIX bash-isms in your code, use #!/bin/bash.
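For illustration, here is a minimal sketch of the asker's script reworked to run under plain /bin/sh: . replaces source, and exit replaces return (which is only valid inside a function). The Python path is the one from the question.

#!/bin/sh
# POSIX-compatible rework: '.' instead of 'source', 'exit' instead of 'return'
PYTHON_PATH=~/bin/python-2.6.1/bin/python

if [ $# -eq 1 ]; then
    ENV_NAME="$1"
    virtualenv -p "$PYTHON_PATH" --no-site-packages "$ENV_NAME"
    activate="$(pwd)/$ENV_NAME/bin/activate"
    if [ ! -f "$activate" ]; then
        echo "ERROR: activate not found at $activate" >&2
        exit 1
    fi
    . "$activate"
else
    echo 'Usage: djangoenv ENV_NAME'
    exit 2
fi

Note that sourcing activate inside the script only affects the script's own shell process; for the activated environment to persist in your interactive shell, you would have to source the script itself (. djangoenv myenv) rather than execute it.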

In Ubuntu, if you execute the script with sh scriptname.sh, you get this problem, because sh ignores the script's shebang and runs it under dash.
Try executing the script with ./scriptname.sh instead.
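A small sketch of what happens (the script name and sourced file are illustrative):

#!/bin/bash
# demo.sh: 'source' is a bash builtin; plain /bin/sh (dash on Ubuntu) lacks it
source ./somefile

# ./demo.sh   -> honors the shebang, runs under bash, 'source' works
# sh demo.sh  -> ignores the shebang, runs under /bin/sh; on Ubuntu this
#                typically fails with "source: not found"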

It's best to give the full path of the file you intend to source, e.g.
source ./.env instead of source .env
or source /var/www/html/site1/.env
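If the script may be invoked from any directory, one common pattern (a sketch; the .env location next to the script is an assumption) is to resolve the path relative to the script itself:

# Resolve the file relative to the script's own directory rather than the
# caller's current working directory.
script_dir=$(cd "$(dirname "$0")" && pwd)
. "$script_dir/.env"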

Related

nix-env and nix-build not found after installation (debian buster)

After installing by following the instructions with
curl https://nixos.org/nix/install | sh
and logging out and back in, nix-env and nix-build are not found.
I had this problem with Debian stretch and now with buster. What am I doing wrong?
The Nix manual instructs you to execute
source ~/.nix-profile/etc/profile.d/nix.sh
but the instructions printed after the installation say to do (I do not remember exactly)
./~/.nix-profile/etc/profile.d/nix.sh
and the same command is inserted into ~/.profile. The cause of the problem is the difference between sourcing the script and executing it (see this superuser question on . versus source). The script sets up the $PATH variable in the environment; that has the desired effect when the script is sourced with source (or .) into the current shell, but no effect when it is executed (the executed script runs in its own shell, whose environment is discarded when it exits).
Cure:
change the line in .profile (or, better, move it to .bashrc) to
if [ -e /home/xxx/.nix-profile/etc/profile.d/nix.sh ]; then source /home/xxx/.nix-profile/etc/profile.d/nix.sh; fi
(xxx is your user name).
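Equivalently, $HOME avoids hardcoding the user name (a minor variation on the line above):

# In ~/.profile or ~/.bashrc: source the Nix environment if it is present
if [ -e "$HOME/.nix-profile/etc/profile.d/nix.sh" ]; then
    . "$HOME/.nix-profile/etc/profile.d/nix.sh"
fi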
You need to add this recommended script.
For me, only setting $PATH like this worked (in .profile):
export PATH="$PATH:/nix/var/nix/profiles/default/bin"
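Either way, a quick sanity check without logging out (a sketch):

. ~/.profile          # re-read the profile in the current shell
command -v nix-env    # should now print the path to nix-env
command -v nix-build  # and to nix-build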

How to run repo from a script inside a container in a jenkins job

I am unable to run repo non-interactively inside a container as part of a freestyle job.
It prompts for the user-name and email. I got round that by doing a git config --global inside the job.
But then it does the color test, and that hangs indefinitely.
Looking at the source code for repo I see this
if os.isatty(0) and os.isatty(1) and not self.manifest.IsMirror:
    if opt.config_name or self._ShouldConfigureUser():
        self._ConfigureUser()
    self._ConfigureColor()
So, I ran the following inside the container:
python -c "import os; print os.isatty(0), os.isatty(1)"
and, sure enough, it printed out True True
Looking at the Jenkins log, it launches the container with --tty specified, and there seems no way to configure that option.
I can't find a bash option to force a script to be run in a non-interactive shell. If I put the above python line in a file and execute it with almost any combination of commands and options, it still prints out True True
The only way I see something different is if I use I/O redirection
bash <a.sh
which prints out False True - i.e. stdin is not a tty, and
bash <a.sh >a.log
which prints False False.
For a complex script, are there any problems using the bash <script approach?
Does anyone know any jenkins magic to prevent docker being launched using --tty?
I know that the --tty is the culprit. I built the container locally and ran the following
$ docker run repotest python -c "import os;print os.isatty(0), os.isatty(1)"
False False
$ docker run --tty repotest python -c "import os;print os.isatty(0), os.isatty(1)"
True True
Running Versions:
repo: 1.12.37 (per Ubuntu 16.04 apt-get)
Jenkins: 2.149
Cloudbees Docker Plugin: 1.7.3
Container base is ubuntu:xenial
I'm using the "Build inside a docker container" option.
To run the bash script repo_script.sh "non-interactively", or more precisely without terminals associated with its standard streams, you could simply run it as
repo_script.sh < /dev/null 2>&1 | cat
assuming you want to see the output the way you would when running it simply as repo_script.sh. By piping standard output and error to a different process, those file descriptors appear to repo_script.sh as a pipe rather than a TTY. You could also direct the output to a file, or even to /dev/null if you do not care about it:
log_file=/dev/null
repo_script.sh < /dev/null > "${log_file}" 2>&1
Running the script as
bash < repo_script.sh | cat
might work too, though it is a very unorthodox and, to my mind, hackish way of running a script just to break the association of a TTY with standard input. From the script engine's point of view, reading a script program from a file is different from reading it from standard input (which typically, if it is a terminal, is not seekable), so there might be some subtle differences that could bite you in unexpected ways. This approach also does not clearly communicate your intention to the next person who needs to understand your code, and may lead to partial hair loss in that person due to extraneous head scratching.
There is no need for any bash options; using these I/O redirections in the invoking shell, as described above, is an easy-to-comprehend, multi-platform, standard convention for changing the standard stream associations.
P.S. I think it should be enough for your repo script to test only whether standard input is a TTY. It looks to me like the author of that script did not think this through: there is simply no point in waiting for input when no terminal device is associated with standard input, so from that alone the script could determine that everything needs to run without user interaction, or stop with an error if that is not possible.
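As a self-contained illustration of how the redirections above change what the child process sees, here is a small shell sketch using the POSIX test -t operator (the same information Python's os.isatty reports):

# check_tty: report whether stdin and stdout are attached to a terminal
check_tty() {
    [ -t 0 ] && in="tty" || in="not a tty"
    [ -t 1 ] && out="tty" || out="not a tty"
    echo "stdin: $in, stdout: $out"
}

check_tty                          # from a terminal: both are TTYs
check_tty < /dev/null              # stdin is no longer a TTY
check_tty < /dev/null 2>&1 | cat   # neither stream is a TTY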

where the $PATH is created in ubuntu (16.04) and how to change it

When checking my $PATH on Ubuntu (16.04),
I get a long list of directories, some of which do not even exist in my file
system, and some of which I just don't need:
echo $PATH
.../usr/games:/usr/local/games:/snap/bin
Where are they created, and how can I remove them?
I want to control the creation of $PATH, rather than
correct it later by the tricks described in
https://unix.stackexchange.com/questions/108873/removing-a-directory-from-path
Some typical places where $PATH can be set when starting up a bash shell on Ubuntu include:
/etc/profile
~/.profile
~/.bashrc
where ~ represents your home directory.
Also look at any scripts called by those scripts.
There may be other things that get called when starting up a bash shell, depending on various conditions. For details, take a look at the INVOCATION section from the command:
$ man bash
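To see which of these files actually touch PATH on a given system, a quick grep helps (a sketch; adjust the file list to your setup):

# List PATH assignments across the usual startup files; errors for missing
# files are suppressed.
grep -n 'PATH=' /etc/environment /etc/profile /etc/profile.d/*.sh \
    ~/.profile ~/.bashrc 2>/dev/null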
See this answer from askubuntu.com to edit the path either using a text editor or the command line.
I found the answer to your question today. The path you want to edit is in /etc/environment.
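Note that /etc/environment is parsed by PAM, not by a shell: it holds plain KEY="value" lines, with no export and no variable expansion. The Ubuntu 16.04 default looks roughly like this:

PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"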

What's the difference between RUN and bash script in a dockerfile?

I've seen many dockerfiles include all build steps in a RUN statement, like:
RUN echo "Hello" && \
    cd /tmp && \
    mv a.txt b.txt && \
    ...
and so on...
My question is: what are the benefits/drawbacks of replacing these instructions with a single bash script that gives me syntax highlighting, loop capabilities, etc.?
Something like:
COPY ./script.sh /tmp
RUN bash /tmp/script.sh
and then
#!/bin/bash
echo "hello" ;
cd /tmp ;
mv a.txt b.txt ;
...
Thanks!
The primary difference is that when you COPY the bash script into the image it will be available for inspection in the running container, whereas the RUN command is a little more opaque. Putting your commands in a file like that is arguably more manageable for other reasons: changes in your VCS history will be a little more clear, and for longer or more complex scripts you will probably find it easier to format things cleanly with the script in a separate file rather than embedded in your Dockerfile in a RUN command.
Otherwise the result is the same (in both cases, you are executing the same set of commands), although the COPY and RUN will result in an extra image layer (vs. just the RUN by itself).
I guess running it as a shell script gives you more control.
For instance, you can use if-else statements to check whether a command has failed and provide a code path to handle it, whereas RUN is more straightforward: when the return code is not 0, it fails the build immediately.
Obviously the case you have there is a relatively simple one, and it would not have made a huge difference. The only impact I can see here is readability: someone would have to read the shell script to know what is happening, compared to having everything in a single file.
I guess it all comes down to using the right tool for the job. If it is a simple command and you don't need complex logic handling, then use RUN.
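One caveat when moving commands into a script: a plain bash script keeps going after a failed command, unlike an && chain, which stops at the first failure. A sketch that restores the fail-fast behavior (file contents assumed from the example above):

#!/bin/bash
# 'set -euo pipefail' aborts on the first failing command, unset variable,
# or broken pipe, mirroring the fail-fast behavior of &&-chained RUN steps.
set -euo pipefail

echo "hello"
cd /tmp
mv a.txt b.txt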

Appending to PATH in a Windows Docker container

I need to append to the PATH within a Windows Docker container, and I've tried many permutations.
ENV PATH=%PATH%;C:\\Foo\\bin
ENV PATH=$PATH;C:\\Foo\\bin
ENV PATH="%PATH%;C:\Foo\bin"
ENV PATH="$PATH;C:\Foo\bin"
RUN "set PATH=%PATH%;C:\Foo\bin"
None of these work: they don't evaluate the preexisting PATH variable.
What is the right syntax to append to the PATH? Can I even append to the PATH inside Docker? (I can on similar Linux containers)
Unfortunately ENV won't work, because Windows environment variables work a little differently from Linux ones. more info
As of now the only way to do this is through RUN.
But you don't need to create a separate file to do this. It can be done with the following much simpler one-line command:
RUN setx path "%path%;C:\Foo\bin"
You can set environment variables permanently in the container using a PowerShell script.
Create a PowerShell script in your docker context (e.g. setpath.ps1) containing this:
[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\Foo\bin", [EnvironmentVariableTarget]::Machine)
Add this script to your Docker context, then RUN it from your dockerfile with something like this:
ADD ./setpath.ps1 c:/MyScripts/setpath.ps1
RUN powershell -Command c:\MyScripts\setpath.ps1
[Environment]::SetEnvironmentVariable is a good way, but will not work in nanoserver.
The best choice is:
RUN setx path '%path%;C:\Foo\bin'
This worked for me:
USER ContainerAdministrator
RUN setx /M PATH "%PATH%;C:/your/path"
USER ContainerUser
As seen in the .net sdk Dockerfile: https://github.com/dotnet/dotnet-docker/blob/20ea9f045a8eacef3fc33d41d58151d793f0cf36/2.1/sdk/nanoserver-1909/amd64/Dockerfile#L28-L29
Despite all the previous answers, I faced an issue in some environments. On a custom local test environment, setx using the %PATH%;C:\foo\bar way works even when the folder has spaces, like C:\Program Files. It did not work, though, when I tried it in our production environment.
Checking what Microsoft does when installing base packages on their own images, it turns out a better and more reliable way is to use the command like this:
RUN setx /M PATH $(${Env:PATH} + \";${Env:ProgramFiles(x86)}\foo\bar\")
This way Docker can resolve the paths correctly and update the PATH data reliably.
Edit:
I've fixed the missing trailing \ in the command, thanks to Robin Ding :)
The following works for me in nanoserver-1809 (from this GitHub issue):
ENV PATH="$WindowsPATH;C:\Foo\bin"
My answer is similar to mirsik's but has no need for a separate script. Just put this in your Dockerfile
RUN $env:PATH = 'C:\Foo\bin;{0}' -f $env:PATH ; \
[Environment]::SetEnvironmentVariable('PATH', $env:PATH, [EnvironmentVariableTarget]::Machine)
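Whichever variant you use, a quick way to verify the result is to print PATH inside a container built from the image (myimage is a placeholder for your tag):

# Sanity check after building: print PATH inside the Windows container
docker run --rm myimage cmd /S /C "echo %PATH%"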
