In my Dockerfile I have the following command:
FROM perl:5.30.3-buster
...
RUN env && source $PERLBREW_ROOT/etc/bashrc && env && carton install
When source $PERLBREW_ROOT/etc/bashrc is executed, PERL5LIB disappears, and carton cannot find the libraries indicated by the PERL5LIB path.
Output from the command:
Step 18/22 : RUN env && source $PERLBREW_ROOT/etc/bashrc && env && carton install
---> Running in 358cb3aa042e
HOSTNAME=358cb3aa042e
PERL_VERSION=5.34.1
PERLBREW_HOME=/opt/perlbrew/home
PWD=/
HOME=/root
PERL5LIB=/opt/proj/lib:/opt/proj/local/lib/perl5:
SHLVL=1
PATH=/opt/perlbrew/bin:/opt/proj/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PERLBREW_ROOT=/opt/perlbrew
_=/usr/bin/env
HOSTNAME=358cb3aa042e
PERL_VERSION=5.34.1
PERLBREW_PATH=/opt/perlbrew/bin:/opt/perlbrew/perls/perl-5.34.1/bin
PERLBREW_HOME=/opt/perlbrew/home
PERLBREW_SHELLRC_VERSION=0.94
PWD=/
MANPATH=/opt/perlbrew/perls/perl-5.34.1/man:
PERLBREW_PERL=perl-5.34.1
HOME=/root
PERLBREW_VERSION=0.94
SHLVL=1
PATH=/opt/perlbrew/bin:/opt/perlbrew/perls/perl-5.34.1/bin:/opt/proj/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PERLBREW_ROOT=/opt/perlbrew
PERLBREW_MANPATH=/opt/perlbrew/perls/perl-5.34.1/man
_=/usr/bin/env
But running the same command on the host system does not wipe PERL5LIB:
(env | grep PERL) && source ~/perl5/perlbrew/etc/bashrc && (env | grep PERL)
PERLBREW_PATH=/home/kes/perl5/perlbrew/bin:/home/kes/perl5/perlbrew/perls/perl-5.30.3/bin
PERLBREW_HOME=/home/kes/.perlbrew
PERLBREW_SHELLRC_VERSION=0.89
PERLBREW_PERL=perl-5.30.3
PERLBREW=command perlbrew
PERLBREW_VERSION=0.89
PERL5LIB=xxx
PERLBREW_ROOT=/home/kes/perl5/perlbrew
PERLBREW_MANPATH=/home/kes/perl5/perlbrew/perls/perl-5.30.3/man
PERLBREW_PATH=/home/kes/perl5/perlbrew/bin:/home/kes/perl5/perlbrew/perls/perl-5.30.3/bin
PERLBREW_HOME=/home/kes/.perlbrew
PERLBREW_SHELLRC_VERSION=0.89
PERLBREW_PERL=perl-5.30.3
PERLBREW=command perlbrew
PERLBREW_VERSION=0.89
PERL5LIB=xxx
PERLBREW_ROOT=/home/kes/perl5/perlbrew
PERLBREW_MANPATH=/home/kes/perl5/perlbrew/perls/perl-5.30.3/man
Can you please tell me what the problem is when running that command from the Dockerfile while building the container?
Thank you @ikegami
It was unset by ${PERLBREW_HOME:-}/init. This unset is required when you switch between perls and do not want their libraries mixed with each other.
For example, you are on perl-5.24 and have installed libraries into its PERL5LIB; then you switch to perl-5.28, but PERL5LIB still points to modules installed by a different version of perl. This may cause coredumps.
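A minimal workaround sketch, assuming PERL5LIB was set with ENV earlier in the Dockerfile (as the first env output suggests): capture it before sourcing the perlbrew bashrc, then re-export it so carton can see the project libraries again:
# Save PERL5LIB, source the perlbrew bashrc (which unsets it), then restore it
RUN SAVED_PERL5LIB="$PERL5LIB" && \
    source $PERLBREW_ROOT/etc/bashrc && \
    export PERL5LIB="$SAVED_PERL5LIB" && \
    carton install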
Related
I'm using this container to set up X11 in GitPod.
ARG base
FROM ${base}
# Dazzle does not rebuild a layer until one of its lines are changed. Increase this counter to rebuild this layer.
ENV TRIGGER_REBUILD=1
# Install Xvfb, JavaFX-helpers and Openbox window manager
RUN sudo install-packages xvfb x11vnc xterm openjfx libopenjfx-java openbox
# Overwrite this env variable to use a different window manager
ENV WINDOW_MANAGER="openbox"
USER root
# Change the default number of virtual desktops from 4 to 1 (footgun)
RUN sed -ri "s/<number>4<\/number>/<number>1<\/number>/" /etc/xdg/openbox/rc.xml
# Install novnc
RUN git clone --depth 1 https://github.com/novnc/noVNC.git /opt/novnc \
&& git clone --depth 1 https://github.com/novnc/websockify /opt/novnc/utils/websockify
COPY novnc-index.html /opt/novnc/index.html
# Add VNC startup script
COPY start-vnc-session.sh /usr/bin/
RUN chmod +x /usr/bin/start-vnc-session.sh
USER gitpod
# This is a bit of a hack. At the moment we have no means of starting background
# tasks from a Dockerfile. This workaround checks, on each bashrc eval, if the X
# server is running on screen 0, and if not starts Xvfb, x11vnc and novnc.
RUN echo "export DISPLAY=:0" >> /home/gitpod/.bashrc.d/300-vnc
RUN echo "[ ! -e /tmp/.X0-lock ] && (/usr/bin/start-vnc-session.sh &> /tmp/display-\${DISPLAY}.log)" >> /home/gitpod/.bashrc.d/300-vnc
USER root
### checks ###
# no root-owned files in the home directory
RUN notOwnedFile=$(find . -not "(" -user gitpod -and -group gitpod ")" -print -quit) \
&& { [ -z "$notOwnedFile" ] \
|| { echo "Error: not all files/dirs in $HOME are owned by 'gitpod' user & group"; exit 1; } }
USER gitpod
This is where it gets sketchy:
# Install novnc
RUN git clone --depth 1 https://github.com/novnc/noVNC.git /opt/novnc \
&& git clone --depth 1 https://github.com/novnc/websockify /opt/novnc/utils/websockify
COPY novnc-index.html /opt/novnc/index.html
I get this output, please help!
COPY failed: file not found in build context or excluded by .dockerignore: stat novnc-index.html: file does not exist
My Dockerfile is in /src, and I'm building from /src. I tried rebuilding with the --no-cache flag and using export DOCKER_BUILDKIT=1, but I'm still stuck with this problem.
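For what it's worth, a few hedged diagnostic commands (assuming the build is run from /src, as stated): COPY can only see files that are inside the build context and not excluded by .dockerignore:
cd /src
ls -la novnc-index.html    # the file must exist inside the build context
cat .dockerignore          # and must not match an exclusion pattern here
docker build --no-cache .  # the trailing "." makes /src the build context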
On a normal server e.g. a Linode VPS I would normally do:
localectl set-locale LANG=<locale>.utf8
timedatectl set-timezone <timezone>
But since systemd is not present or does not work on containers I get:
Failed to create bus connection: No such file or directory
Now, my goal is just to change these settings without using systemd, but such an approach seems to be undocumented. Is there a reference for non-systemd alternatives to these config tools?
There is some documentation about locale settings in the Arch wiki: https://wiki.archlinux.org/index.php/locale
In the Dockerfile, adjust LANG to your desired locale. You can add more than one locale to /etc/locale.gen to have a choice later.
This works on Debian and Arch, but locale-gen is missing on Fedora:
ENV LANG=en_US.utf8
RUN echo "$LANG UTF-8" >> /etc/locale.gen
RUN locale-gen
RUN update-locale --reset LANG=$LANG
More general is localedef, which works on Fedora too:
ENV LANG=en_US.UTF-8
RUN localedef --verbose --force -i en_US -f UTF-8 en_US.UTF-8
Put this in your Dockerfile
ENV TZ=America/Denver
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
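A hedged addition: on slim Debian/Ubuntu base images the zoneinfo files may be missing until the tzdata package is installed, so a fuller version of the same idea might be:
ENV TZ=America/Denver
# tzdata prompts interactively unless DEBIAN_FRONTEND is set
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y tzdata \
 && ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone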
Edit .bash_profile or .bashrc as root and add the following.
TZ='Asia/Kolkata'
export TZ
Save the file and commit the image after it's done.
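For reference, committing the change from a running container (the container and image names here are hypothetical):
docker commit my-container my-image:with-tz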
Based on a technique used in sti-base, I came up with the following workaround for https://github.com/ncoghlan/fedbuildenv/blob/09a18d91e7af64a45394669bac2595a4b628960d/Dockerfile#L26:
# Set a useful default locale
RUN echo "export LANG=en_US.utf-8" > /opt/export_LANG.sh
ENV BASH_ENV=/opt/export_LANG.sh \
ENV=/opt/export_LANG.sh \
PROMPT_COMMAND="source /opt/export_LANG.sh"
BASH_ENV covers non-interactive bash sessions, ENV covers sh sessions, and PROMPT_COMMAND covers interactive bash sessions.
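A quick way to check the non-interactive case (the image name is hypothetical): bash sources the file named in BASH_ENV before running the command, so LANG should already be set:
docker run --rm my-image bash -c 'echo $LANG'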
This seems to be Debian's equivalent of locale-gen:
RUN localedef -v -c -i fr_FR -f UTF-8 fr_FR.UTF-8 || true
I'm trying to install nvm within a Dockerfile. It seems like it installs OK, but the nvm command is not working.
Dockerfile:
# Install nvm
RUN git clone http://github.com/creationix/nvm.git /root/.nvm;
RUN chmod -R 777 /root/.nvm/;
RUN sh /root/.nvm/install.sh;
RUN export NVM_DIR="$HOME/.nvm";
RUN echo "[[ -s $HOME/.nvm/nvm.sh ]] && . $HOME/.nvm/nvm.sh" >> $HOME/.bashrc;
RUN nvm ls-remote;
Build output:
Step 23/39 : RUN git clone http://github.com/creationix/nvm.git /root/.nvm;
---> Running in ca485a68b9aa
Cloning into '/root/.nvm'...
---> a6f61d486443
Removing intermediate container ca485a68b9aa
Step 24/39 : RUN chmod -R 777 /root/.nvm/
---> Running in 6d4432926745
---> 30e7efc5bd41
Removing intermediate container 6d4432926745
Step 25/39 : RUN sh /root/.nvm/install.sh;
---> Running in 79b517430285
=> Downloading nvm from git to '$HOME/.nvm'
=> Cloning into '$HOME/.nvm'...
* (HEAD detached at v0.33.0)
master
=> Compressing and cleaning up git repository
=> Appending nvm source string to /root/.profile
=> bash_completion source string already in /root/.profile
npm info it worked if it ends with ok
npm info using npm@3.10.10
npm info using node@v6.9.5
npm info ok
=> Installing Node.js version 6.9.5
Downloading and installing node v6.9.5...
Downloading https://nodejs.org/dist/v6.9.5/node-v6.9.5-linux-x64.tar.xz...
######################################################################## 100.0%
Computing checksum with sha256sum
Checksums matched!
Now using node v6.9.5 (npm v3.10.10)
Creating default alias: default -> 6.9.5 (-> v6.9.5 *)
/root/.nvm/install.sh: 136: [: v6.9.5: unexpected operator
Failed to install Node.js 6.9.5
=> Close and reopen your terminal to start using nvm or run the following to use it now:
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
---> 9f6f3e74cd19
Removing intermediate container 79b517430285
Step 26/39 : RUN export NVM_DIR="$HOME/.nvm";
---> Running in 1d768138e3d5
---> 8039dfb4311c
Removing intermediate container 1d768138e3d5
Step 27/39 : RUN echo "[[ -s $HOME/.nvm/nvm.sh ]] && . $HOME/.nvm/nvm.sh" >> $HOME/.bashrc;
---> Running in d91126b7de62
---> 52313e09866e
Removing intermediate container d91126b7de62
Step 28/39 : RUN nvm ls-remote;
---> Running in f13c1ed42b3a
/bin/sh: 1: nvm: not found
The command '/bin/sh -c nvm ls-remote;' returned a non-zero code: 127
The error:
Step 28/39 : RUN nvm ls-remote;
---> Running in f13c1ed42b3a
/bin/sh: 1: nvm: not found
The command '/bin/sh -c nvm ls-remote;' returned a non-zero code: 127
The end of my /root/.bashrc file looks like:
[[ -s /root/.nvm/nvm.sh ]] && . /root/.nvm/nvm.sh
Everything else in the Dockerfile works. Adding the nvm stuff is what broke it. Here is the full file.
I made the following changes to your Dockerfile to make it work:
First, replace...
RUN sh /root/.nvm/install.sh;
...with:
RUN bash /root/.nvm/install.sh;
Why? On Redhat-based systems, /bin/sh is a symlink to /bin/bash. But on Ubuntu, /bin/sh is a symlink to /bin/dash. And this is what happens with dash:
root@52d54205a137:/# bash -c '[ 1 == 1 ] && echo yes!'
yes!
root@52d54205a137:/# dash -c '[ 1 == 1 ] && echo yes!'
dash: 1: [: 1: unexpected operator
Second, replace...
RUN nvm ls-remote;
...with:
RUN bash -i -c 'nvm ls-remote';
Why? Because the default .bashrc for a user in Ubuntu contains (almost at the top):
# If not running interactively, don't do anything
[ -z "$PS1" ] && return
And the sourcing of nvm's scripts takes place at the bottom, so we need to make sure that bash is invoked interactively by passing the -i argument.
Third, you could skip the following lines in your Dockerfile:
RUN export NVM_DIR="$HOME/.nvm";
RUN echo "[[ -s $HOME/.nvm/nvm.sh ]] && . $HOME/.nvm/nvm.sh" >> $HOME/.bashrc;
Why? Because bash /root/.nvm/install.sh; will automatically do it for you:
[fedora@myhost ~]$ sudo docker run --rm -it 2a283d6e2173 tail -2 /root/.bashrc
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
Installation of nvm on Ubuntu in a Dockerfile
In the case of Ubuntu 20.04, you can use just these commands and everything will work:
FROM ubuntu:20.04
RUN apt update -y && apt upgrade -y && apt install wget bash -y
RUN wget -qO- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
RUN bash -i -c 'nvm ls-remote'
Hopefully it will work.
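As a hedged follow-up: later steps that need node must also go through an interactive bash, for example to install and pin an LTS version:
RUN bash -i -c 'nvm install --lts && nvm alias default lts/*'
RUN bash -i -c 'node --version && npm --version'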
Problem
My wercker build exits with Failed step: setup environment - Command exited with exit code: 1 when I switch user in my Docker image. I'm running wercker dev from the command line. The Dockerfile builds fine with Docker itself on the command line, as well as on Docker Hub, and I can run it fine. It's only when I use it with wercker that the error occurs.
For example in my Dockerfile is the following code:
# Adding user
RUN adduser --disabled-password --gecos '' dockworker && adduser dockworker sudo && echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
RUN mkdir -p /home/dockworker && chown -R dockworker:dockworker /home/dockworker
USER dockworker # Line the build seems to break on
When I comment this line out, it seems to pass. The problem with this, for me, is the following: I'd like to switch to another user, since I'm trying to install nvm (for gulp and bower). I generally prefer not to install this as root, so I add a user for it.
Workaround?
However, when I do install nvm as root in my Dockerfile (so, completely removing the user-related lines in the code block above):
ENV NODE_VERSION 0.12.7
ENV NVM_DIR /usr/local/nvm
# NVM
RUN curl https://raw.githubusercontent.com/creationix/nvm/v0.25.4/install.sh | NVM_DIR=/usr/local/nvm bash
#install the specified node version and set it as the default one, install the global npm packages
RUN . /usr/local/nvm/nvm.sh && nvm install $NODE_VERSION && nvm alias default $NODE_VERSION && npm install -g bower && npm install -g gulp
Then it does get past the setup environment stage, but during the steps it errors out that nvm and npm are not found. The step in the wercker.yml:
box:
id: francobolli/docker-ubuntu-14.04-php-5.6
tag: latest
env:
NVM_DIR: /usr/local/nvm
dev:
steps:
- script:
name: gulp styles and javascript
code: |
npm install
bower install --allow-root
gulp --env=production
I don't really understand this. When I run both Docker images from the command line (with wercker removed from the context completely), I can execute nvm and npm just fine, but when I run it through wercker, it seems the .bashrc file is not being executed. When I cat ~/.bashrc during the steps, I can see:
export NVM_DIR="/usr/local/nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # This loads nvm
Workaround!
When I enter this in a step, it is executed and I can npm install without a problem, so it seems this is never executed through .bashrc:
...
- script:
name: gulp styles and javascript
code: |
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # It works when I put it here, but it's also in ~/.bashrc, which doesn't seem to get executed
npm install
...
Note: If I source ~/.bashrc in the wercker step instead, it does not work.
Question
So my question is: what am I doing wrong that I cannot switch users in the Wercker build? And even if I could, would I have the same problem as when running nvm as root? nvm and npm CAN be found when a Docker container is instantiated from the command line, but CAN'T be found when running it through Wercker. What's the best solution?
I'd rather not add commands to the wercker.yml if this can be resolved through proper user configuration or proper nvm configuration. Sorry if I'm missing something very obvious.
This has nothing to do with Docker configuration, but with how Wercker handles Docker boxes. From the documentation:
Using Sudo
The sudo command is no longer supported in wercker v2 and effectively does nothing when used.
And for deployment:
Please note that if you update a project to make use of Docker (Ewok version) and this project has autodeployment, this deploy will most likely fail. We will update our documentation in the future on how to deploy these containers.
However, I did get it to build (and deploy) with the solution (temporary workaround?) as displayed in the original question.
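For reference, here is the full working step from the workaround, with the nvm sourcing inlined into the wercker.yml:
box:
  id: francobolli/docker-ubuntu-14.04-php-5.6
  tag: latest
env:
  NVM_DIR: /usr/local/nvm
dev:
  steps:
    - script:
        name: gulp styles and javascript
        code: |
          # ~/.bashrc is not evaluated by Wercker's shell, so source nvm.sh here
          [ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
          npm install
          bower install --allow-root
          gulp --env=production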
I have a Dockerfile that downloads and builds GTK from source, but the following lines are not updating my image's environment variable:
RUN PATH="/opt/gtk/bin:$PATH"
RUN export PATH
I read that I should be using ENV to set environment values, but the following instruction doesn't seem to work either:
ENV PATH /opt/gtk/bin:$PATH
This is my entire Dockerfile:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y golang gcc make wget git libxml2-utils libwebkit2gtk-3.0-dev libcairo2 libcairo2-dev libcairo-gobject2 shared-mime-info libgdk-pixbuf2.0-* libglib2-* libatk1.0-* libpango1.0-* xserver-xorg xvfb
# Downloading GTK
RUN wget http://ftp.gnome.org/pub/gnome/sources/gtk+/3.12/gtk+-3.12.2.tar.xz
RUN tar xf gtk+-3.12.2.tar.xz
RUN cd gtk+-3.12.2
# Setting environment variables before running configure
RUN CPPFLAGS="-I/opt/gtk/include"
RUN LDFLAGS="-L/opt/gtk/lib"
RUN PKG_CONFIG_PATH="/opt/gtk/lib/pkgconfig"
RUN export CPPFLAGS LDFLAGS PKG_CONFIG_PATH
RUN ./configure --prefix=/opt/gtk
RUN make
RUN make install
# running ldconfig after make install so that the newly installed libraries are found.
RUN ldconfig
# Setting the LD_LIBRARY_PATH environment variable so the systems dynamic linker can find the newly installed libraries.
RUN LD_LIBRARY_PATH="/opt/gtk/lib"
# Updating PATH environment program so that utility binaries installed by the various libraries will be found.
RUN PATH="/opt/gtk/bin:$PATH"
RUN export LD_LIBRARY_PATH PATH
# Collecting garbage
RUN rm -rf gtk+-3.12.2.tar.xz
# creating go code root
RUN mkdir gocode
RUN mkdir gocode/src
RUN mkdir gocode/bin
RUN mkdir gocode/pkg
# Setting the GOROOT and GOPATH enviornment variables, any commands created are automatically added to PATH
RUN GOROOT=/usr/lib/go
RUN GOPATH=/root/gocode
RUN PATH=$GOPATH/bin:$PATH
RUN export GOROOT GOPATH PATH
You can use Environment Replacement in your Dockerfile as follows:
ENV PATH="${PATH}:/opt/gtk/bin"
Although the answer that Gunter posted was correct, it is no different from what I had already posted. The problem was not the ENV directive, but the subsequent instruction RUN export PATH.
There's no need to export the environment variables, once you have declared them via ENV in your Dockerfile.
As soon as the RUN export ... lines were removed, my image was built successfully.
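A minimal corrected sketch of the environment-related portions, assuming the /opt/gtk prefix from the question (ENV persists across layers, so no export is needed):
# Set build flags and library/search paths once; later RUN steps inherit them
ENV CPPFLAGS="-I/opt/gtk/include" \
    LDFLAGS="-L/opt/gtk/lib" \
    PKG_CONFIG_PATH="/opt/gtk/lib/pkgconfig"
ENV LD_LIBRARY_PATH="/opt/gtk/lib"
ENV PATH="/opt/gtk/bin:${PATH}"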
[I mentioned this in response to the selected answer, but it was suggested to make it more prominent as an answer of its own]
It should be noted that
ENV PATH="/opt/gtk/bin:${PATH}"
may not be the same as
ENV PATH="/opt/gtk/bin:$PATH"
The former, with curly brackets, might provide you with the host's PATH. The documentation doesn't suggest this would be the case, but I have observed that it is. This is simple to check: just do RUN echo $PATH and compare it to RUN echo ${PATH}.
This is discouraged (if you want to create/distribute a clean Docker image), since the PATH variable is set by the /etc/profile script and the value can be overridden.
head /etc/profile:
if [ "`id -u`" -eq 0 ]; then
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
else
PATH="/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games"
fi
export PATH
At the end of the Dockerfile, you could add:
RUN echo "export PATH=$PATH" > /etc/environment
So PATH is set for all users.