I would like to change the default Jenkins home directory from /var/lib/jenkins to /data.
I tried a few things, like changing JENKINS_HOME in /etc/default/jenkins, or adding a file /etc/profile.d/jenkins.sh with the following content: export JENKINS_HOME=/data. I already copied all files from /var/lib/jenkins with cp -R /var/lib/jenkins/* /data. But every time I get the following error in the Jenkins GUI:
---- Debugging information ----
message : com.michelin.cio.hudson.plugins.rolestrategy.RoleBasedAuthorizationStrategy
cause-exception : com.thoughtworks.xstream.mapper.CannotResolveClassException
cause-message : com.michelin.cio.hudson.plugins.rolestrategy.RoleBasedAuthorizationStrategy
class : hudson.model.Hudson
required-type : hudson.model.Hudson
converter-type : hudson.util.RobustReflectionConverter
path : /hudson/authorizationStrategy
line number : 14
version : not available
-------------------------------
Output of env:
root@ubuntu:/home/ubuntu# env
SHELL=/bin/bash
TERM=xterm
USER=root
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:
SUDO_USER=ubuntu
SUDO_UID=1000
USERNAME=root
MAIL=/var/mail/root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
PWD=/home/ubuntu
LANG=de_DE.UTF-8
SHLVL=1
SUDO_COMMAND=/bin/su
HOME=/root
LOGNAME=root
LESSOPEN=| /usr/bin/lesspipe %s
SUDO_GID=1000
LESSCLOSE=/usr/bin/lesspipe %s %s
_=/usr/bin/env
root@ubuntu:/home/ubuntu#
root's .profile file:
# ~/.profile: executed by Bourne-compatible login shells.
if [ "$BASH" ]; then
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
fi
mesg n || true
export JENKINS_HOME=/data
So your JENKINS_HOME environment variable is not set correctly.
On Ubuntu, you have to update the profile file ~/.profile and add the line export JENKINS_HOME=/data.
Next, you have to log out and log back in for the change to take effect.
If Jenkins is running under a system account, open a session as that account and run: . ~/.profile
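For reference, a minimal sketch of both places mentioned above (assuming the Debian/Ubuntu package layout, where the init script reads /etc/default/jenkins):

# /etc/default/jenkins -- read by the Jenkins init script on Ubuntu/Debian
JENKINS_HOME=/data

# ~/.profile of the account Jenkins runs as
export JENKINS_HOME=/data

Then restart the service so it picks up the new home:

sudo service jenkins restart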
I'm using this container to set up X11 in GitPod.
ARG base
FROM ${base}
# Dazzle does not rebuild a layer until one of its lines are changed. Increase this counter to rebuild this layer.
ENV TRIGGER_REBUILD=1
# Install Xvfb, JavaFX-helpers and Openbox window manager
RUN sudo install-packages xvfb x11vnc xterm openjfx libopenjfx-java openbox
# Overwrite this env variable to use a different window manager
ENV WINDOW_MANAGER="openbox"
USER root
# Change the default number of virtual desktops from 4 to 1 (footgun)
RUN sed -ri "s/<number>4<\/number>/<number>1<\/number>/" /etc/xdg/openbox/rc.xml
# Install novnc
RUN git clone --depth 1 https://github.com/novnc/noVNC.git /opt/novnc \
&& git clone --depth 1 https://github.com/novnc/websockify /opt/novnc/utils/websockify
COPY novnc-index.html /opt/novnc/index.html
# Add VNC startup script
COPY start-vnc-session.sh /usr/bin/
RUN chmod +x /usr/bin/start-vnc-session.sh
USER gitpod
# This is a bit of a hack. At the moment we have no means of starting background
# tasks from a Dockerfile. This workaround checks, on each bashrc eval, if the X
# server is running on screen 0, and if not starts Xvfb, x11vnc and novnc.
RUN echo "export DISPLAY=:0" >> /home/gitpod/.bashrc.d/300-vnc
RUN echo "[ ! -e /tmp/.X0-lock ] && (/usr/bin/start-vnc-session.sh &> /tmp/display-\${DISPLAY}.log)" >> /home/gitpod/.bashrc.d/300-vnc
USER root
### checks ###
# no root-owned files in the home directory
RUN notOwnedFile=$(find . -not "(" -user gitpod -and -group gitpod ")" -print -quit) \
&& { [ -z "$notOwnedFile" ] \
|| { echo "Error: not all files/dirs in $HOME are owned by 'gitpod' user & group"; exit 1; } }
USER gitpod
This is where it gets sketchy:
# Install novnc
RUN git clone --depth 1 https://github.com/novnc/noVNC.git /opt/novnc \
&& git clone --depth 1 https://github.com/novnc/websockify /opt/novnc/utils/websockify
COPY novnc-index.html /opt/novnc/index.html
I get this output, please help!
COPY failed: file not found in build context or excluded by .dockerignore: stat novnc-index.html: file does not exist
My Dockerfile is in /src and I'm building from /src. I tried rebuilding with the --no-cache flag and with export DOCKER_BUILDKIT=1, but I'm still stuck with this problem.
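A few hedged checks for this kind of COPY failure (paths taken from the question; the image tag my-x11-image is a placeholder): the file must sit inside the build context passed to docker build, and must not match a .dockerignore pattern.

# Is the file really in the context root, next to the Dockerfile?
ls -l /src/novnc-index.html

# Is it excluded? Look for patterns that match it, e.g. *.html
cat /src/.dockerignore

# Build with /src as the context (the final argument)
docker build --no-cache -t my-x11-image /src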
In my Dockerfile I have the following command:
FROM perl:5.30.3-buster
...
RUN env && source $PERLBREW_ROOT/etc/bashrc && env && carton install
When source $PERLBREW_ROOT/etc/bashrc is executed, PERL5LIB disappears, and carton cannot find the libraries indicated by the PERL5LIB path.
Output from the command:
Step 18/22 : RUN env && source $PERLBREW_ROOT/etc/bashrc && env && carton install
---> Running in 358cb3aa042e
HOSTNAME=358cb3aa042e
PERL_VERSION=5.34.1
PERLBREW_HOME=/opt/perlbrew/home
PWD=/
HOME=/root
PERL5LIB=/opt/proj/lib:/opt/proj/local/lib/perl5:
SHLVL=1
PATH=/opt/perlbrew/bin:/opt/proj/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PERLBREW_ROOT=/opt/perlbrew
_=/usr/bin/env
HOSTNAME=358cb3aa042e
PERL_VERSION=5.34.1
PERLBREW_PATH=/opt/perlbrew/bin:/opt/perlbrew/perls/perl-5.34.1/bin
PERLBREW_HOME=/opt/perlbrew/home
PERLBREW_SHELLRC_VERSION=0.94
PWD=/
MANPATH=/opt/perlbrew/perls/perl-5.34.1/man:
PERLBREW_PERL=perl-5.34.1
HOME=/root
PERLBREW_VERSION=0.94
SHLVL=1
PATH=/opt/perlbrew/bin:/opt/perlbrew/perls/perl-5.34.1/bin:/opt/proj/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PERLBREW_ROOT=/opt/perlbrew
PERLBREW_MANPATH=/opt/perlbrew/perls/perl-5.34.1/man
_=/usr/bin/env
But running the same command on the host system does not wipe PERL5LIB:
(env | grep PERL) && source ~/perl5/perlbrew/etc/bashrc && (env | grep PERL)
PERLBREW_PATH=/home/kes/perl5/perlbrew/bin:/home/kes/perl5/perlbrew/perls/perl-5.30.3/bin
PERLBREW_HOME=/home/kes/.perlbrew
PERLBREW_SHELLRC_VERSION=0.89
PERLBREW_PERL=perl-5.30.3
PERLBREW=command perlbrew
PERLBREW_VERSION=0.89
PERL5LIB=xxx
PERLBREW_ROOT=/home/kes/perl5/perlbrew
PERLBREW_MANPATH=/home/kes/perl5/perlbrew/perls/perl-5.30.3/man
PERLBREW_PATH=/home/kes/perl5/perlbrew/bin:/home/kes/perl5/perlbrew/perls/perl-5.30.3/bin
PERLBREW_HOME=/home/kes/.perlbrew
PERLBREW_SHELLRC_VERSION=0.89
PERLBREW_PERL=perl-5.30.3
PERLBREW=command perlbrew
PERLBREW_VERSION=0.89
PERL5LIB=xxx
PERLBREW_ROOT=/home/kes/perl5/perlbrew
PERLBREW_MANPATH=/home/kes/perl5/perlbrew/perls/perl-5.30.3/man
Please tell me, what is the problem when running that command from the Dockerfile while building the container?
Thank you @ikegami.
It was unset by ${PERLBREW_HOME:-}/init. This unset is required when you switch between perls and do not want their libraries mixed with each other.
For example, you are on perl-5.24 and have installed libraries into its PERL5LIB, then you switch to perl-5.28, but PERL5LIB still points to modules installed by a different version of perl. This may cause core dumps.
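A hedged workaround sketch for the Dockerfile: re-export PERL5LIB after perlbrew's bashrc has unset it (the value is copied from the env output above; adjust it if your project layout differs):

RUN env && source $PERLBREW_ROOT/etc/bashrc \
    && export PERL5LIB=/opt/proj/lib:/opt/proj/local/lib/perl5 \
    && carton install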
While running the Docker image, I am receiving the following error:
/entrypoint.sh: line 7: USER: unbound variable
Code of entrypoint.sh:
#!/bin/bash
set -euo pipefail
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
POSTGRES_TIMEOUT=60
echo "Running as: ${USER}"
if [ "${USER}" != "root" ]; then
echo "Changing owner of files in ${AIRFLOW_HOME} to ${USER}"
chown -R "${USER}" ${AIRFLOW_HOME}
fi
set +e
Declaration of the USER environment variable in the Dockerfile:
# Delay creation of user and group
ONBUILD ARG THEUSER=afpuser
ONBUILD ARG THEGROUP=hadoop
ONBUILD ENV USER ${THEUSER}
ONBUILD ENV GROUP ${THEGROUP}
ONBUILD RUN groupadd -r "${GROUP}" && useradd -rmg "${GROUP}" "${USER}"
The full code of the Docker image is present in the link.
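Note that ONBUILD instructions fire only when another image is built FROM this one; when the base image itself is run, USER was never set, so set -u aborts. A minimal defensive sketch for entrypoint.sh (the root default and the /usr/local/airflow fallback are assumptions, not values from the question):

#!/bin/bash
set -euo pipefail

# ${USER:-root} expands to "root" whenever USER is unset,
# so `set -u` no longer aborts with "unbound variable".
RUN_AS="${USER:-root}"
echo "Running as: ${RUN_AS}"

# AIRFLOW_HOME needs the same guard if it can also be unset
# (/usr/local/airflow is only a placeholder default here).
if [ "${RUN_AS}" != "root" ]; then
    chown -R "${RUN_AS}" "${AIRFLOW_HOME:-/usr/local/airflow}"
fi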
How can I get /etc/profile to run automatically when starting an Alpine Docker container interactively? I have added some aliases to an aliases.sh file and placed it in /etc/profile.d, but when I start the container using docker run -it [my_container] sh, my aliases aren't active. I have to manually type . /etc/profile from the command line each time.
Is there some other configuration necessary to get /etc/profile to run at login? I've also had problems with using a ~/.profile file. Any insight is appreciated!
EDIT:
Based on VonC's answer, I pulled and ran his example ruby container. Here is what I got:
$ docker run --rm --name ruby -it codeclimate/alpine-ruby:b42
/ # more /etc/profile.d/rubygems.sh
export PATH=$PATH:/usr/lib/ruby/gems/2.0.0/bin
/ # env
no_proxy=*.local, 169.254/16
HOSTNAME=6c7e93ebc5a1
SHLVL=1
HOME=/root
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
/ # exit
Although the /etc/profile.d/rubygems.sh file exists, it is not being run when I log in, and my PATH environment variable is not being updated. Am I using the wrong docker run command? Is something else missing? Has anyone gotten ~/.profile or /etc/profile.d/ files to work with Alpine on Docker? Thanks!
The default shell in Alpine Linux is ash.
Ash will only read the /etc/profile and ~/.profile files if it is started as a login shell (sh -l).
To force Ash to source /etc/profile, or any other script, upon its invocation as a non-login shell, you need to set up an environment variable called ENV before launching Ash.
e.g. in your Dockerfile
FROM alpine:3.5
ENV ENV="/root/.ashrc"
RUN echo "echo 'Hello, world!'" > "$ENV"
When you build that, you get:
deployer@ubuntu-1604-amd64:~/blah$ docker build --tag test .
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM alpine:3.5
3.5: Pulling from library/alpine
627beaf3eaaf: Pull complete
Digest: sha256:58e1a1bb75db1b5a24a462dd5e2915277ea06438c3f105138f97eb53149673c4
Status: Downloaded newer image for alpine:3.5
---> 4a415e366388
Step 2/3 : ENV ENV "/root/.ashrc"
---> Running in a9b6ff7303c2
---> 8d4af0b7839d
Removing intermediate container a9b6ff7303c2
Step 3/3 : RUN echo "echo 'Hello, world!'" > "$ENV"
---> Running in 57c2fd3353f3
---> 2cee6e034546
Removing intermediate container 57c2fd3353f3
Successfully built 2cee6e034546
Finally, when you run the newly generated container, you get:
deployer@ubuntu-1604-amd64:~/blah$ docker run -ti test /bin/sh
Hello, world!
/ # exit
Notice the Ash shell didn't run as a login shell.
So to answer your query, replace
ENV ENV="/root/.ashrc"
with:
ENV ENV="/etc/profile"
and Alpine Linux's Ash shell will automatically source the /etc/profile script each time the shell is launched.
Gotcha: /etc/profile is normally meant to be sourced only once! So I would advise that you don't source it, and instead source a /root/.somercfile.
Source: https://stackoverflow.com/a/40538356
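A hedged sketch of that advice (the file name /root/.somercfile is just the placeholder from the gotcha above): point ENV at a small rc file that sources the /etc/profile.d snippets without pulling in all of /etc/profile.

FROM alpine:3.5
ENV ENV="/root/.somercfile"
# Source each profile.d snippet; the -r guard skips the literal
# pattern when the glob matches nothing.
RUN printf '%s\n' 'for f in /etc/profile.d/*.sh; do [ -r "$f" ] && . "$f"; done' > /root/.somercfile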
You can still try, in your Dockerfile:
RUN echo '\
. /etc/profile ; \
' >> /root/.profile
(assuming the current user is root. If not, replace /root with the full home path)
That being said, those /etc/profile.d/xx.sh scripts should run.
See codeclimate/docker-alpine-ruby as an example:
COPY files /
with files/etc including a files/etc/profile.d/rubygems.sh that runs just fine.
In the OP's project Dockerfile, there is a
COPY aliases.sh /etc/profile.d/
But the default shell is not a login shell (sh -l), which means profile files (or those in /etc/profile.d) are not sourced.
Adding sh -l would work:
docker#default:~$ docker run --rm --name ruby -it codeclimate/alpine-ruby:b42 sh -l
87a58e26b744:/# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/ruby/gems/2.0.0/bin
As mentioned by Jinesh before, the default shell in Alpine Linux is ash:
localhost:~$ echo $SHELL
/bin/ash
localhost:~$
Therefore the simple solution is to add your aliases in .profile. In this case, I put all my aliases in ~/.ash_aliases:
localhost:~$ cat .profile
# ~/.profile
# Alias
if [ -f ~/.ash_aliases ]; then
. ~/.ash_aliases
fi
localhost:~$
The .ash_aliases file:
localhost:~$ cat .ash_aliases
alias a=alias
alias c=clear
alias f=file
alias g=grep
alias l='ls -lh'
localhost:~$
And it works :)
I use this:
docker exec -it my_container /bin/ash '-l'
The -l flag passed to ash makes it behave as a login shell, thus reading ~/.profile.
I have a problem with Docker, which does not persist changes made by commands launched via RUN.
Here is my Dockerfile:
FROM jenkins:latest
RUN echo "foo" > /var/jenkins_home/toto ; ls -alh /var/jenkins_home
RUN ls -alh /var/jenkins_home
RUN rm /var/jenkins_home/.bash_logout ; ls -alh /var/jenkins_home
RUN ls -alh /var/jenkins_home
RUN echo "bar" >> /var/jenkins_home/.profile ; cat /var/jenkins_home/.profile
RUN cat /var/jenkins_home/.profile
And here is the output:
Sending build context to Docker daemon 373.8 kB
Step 1 : FROM jenkins:latest
 ---> fc39417bd5fb
Step 2 : RUN echo "foo" > /var/jenkins_home/toto ; ls -alh /var/jenkins_home
 ---> Using cache
 ---> c614b13d9d83
Step 3 : RUN ls -alh /var/jenkins_home
 ---> Using cache
 ---> 8a16a0c92f67
Step 4 : RUN rm /var/jenkins_home/.bash_logout ; ls -alh /var/jenkins_home
 ---> Using cache
 ---> f6ca5d5bdc64
Step 5 : RUN ls -alh /var/jenkins_home
 ---> Using cache
 ---> 3372c3275b1b
Step 6 : RUN echo "bar" >> /var/jenkins_home/.profile ; cat /var/jenkins_home/.profile
 ---> Running in 79842be2c6e3
# ~/.profile: executed by the command interpreter for login shells.
# This file is not read by bash(1), if ~/.bash_profile or ~/.bash_login
# exists.
# see /usr/share/doc/bash/examples/startup-files for examples.
# the files are located in the bash-doc package.
# the default umask is set in /etc/profile; for setting the umask
# for ssh logins, install and configure the libpam-umask package.
#umask 022
# if running bash
if [ -n "$BASH_VERSION" ]; then
    # include .bashrc if it exists
    if [ -f "$HOME/.bashrc" ]; then
        . "$HOME/.bashrc"
    fi
fi
# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/bin" ] ; then
    PATH="$HOME/bin:$PATH"
fi
bar
 ---> 28559b8fe041
Removing intermediate container 79842be2c6e3
Step 7 : RUN cat /var/jenkins_home/.profile
 ---> Running in c694e0cb5866
# ~/.profile: executed by the command interpreter for login shells.
# This file is not read by bash(1), if ~/.bash_profile or ~/.bash_login
# exists.
# see /usr/share/doc/bash/examples/startup-files for examples.
# the files are located in the bash-doc package.
# the default umask is set in /etc/profile; for setting the umask
# for ssh logins, install and configure the libpam-umask package.
#umask 022
# if running bash
if [ -n "$BASH_VERSION" ]; then
    # include .bashrc if it exists
    if [ -f "$HOME/.bashrc" ]; then
        . "$HOME/.bashrc"
    fi
fi
# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/bin" ] ; then
    PATH="$HOME/bin:$PATH"
fi
 ---> b7e47d65d65e
Removing intermediate container c694e0cb5866
Successfully built b7e47d65d65e
Do you guys know why the "foo" file is not persisted at step 3? Why the ".bash_logout" file is recreated at step 5? Why "bar" is no longer in my ".profile" file at step 7?
And of course, if I start a container based on this image, none of my modifications are persisted... so my Dockerfile is useless. Any clue?
The reason those changes are not persisted is that they are inside a volume: the Jenkins Dockerfile marks /var/jenkins_home/ as a VOLUME.
Information inside volumes is not persisted during docker build, or more precisely: each build step creates a new volume based on the image's content, discarding the volume that was used in the previous build step.
How to resolve this?
I think the best way to resolve this is to:
Add the files you want to modify inside jenkins_home in a different location inside the image, e.g. /var/jenkins_home_overrides/
Create a custom entrypoint based on, or "wrapping", the default entrypoint script, which copies the content of your jenkins_home_overrides to jenkins_home the first time the container is started (see the sketch below).
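A rough sketch of that wrapper idea (the override directory comes from the bullet above; the original entrypoint path /usr/local/bin/jenkins.sh is an assumption, check your base image):

#!/bin/bash
# custom-entrypoint.sh
set -e

# On first start only, seed the volume from the baked-in overrides
if [ ! -e /var/jenkins_home/.overrides-applied ]; then
    cp -r /var/jenkins_home_overrides/. /var/jenkins_home/
    touch /var/jenkins_home/.overrides-applied
fi

# Hand over to the image's original entrypoint
exec /usr/local/bin/jenkins.sh "$@"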
Actually...
And just as I wrote that up, it looks like the official Jenkins image already supports this out of the box:
https://github.com/jenkinsci/docker/blob/683b0d6ed17016ee3211f247304ef2f265102c2b/jenkins.sh#L5-L23
According to the documentation, you need to add your files to the /usr/share/jenkins/ref/ directory, and those will be copied to /var/jenkins_home upon start.
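For example, a minimal sketch (the file name toto is reused from the question above):

FROM jenkins:latest
# Everything under /usr/share/jenkins/ref/ is copied into /var/jenkins_home
# by the image's startup script on first run.
COPY toto /usr/share/jenkins/ref/toto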
Also see https://issues.jenkins-ci.org/browse/JENKINS-24986