Running sbt in docker as non-root user - docker

I'm trying to create a Docker image that has sbt installed and can build sbt projects, but that will not run builds as the root user (this is all in the context of running Jenkins inside Docker).
The Dockerfile sets up sbt:
ENV SBT_VERSION=1.1.6
RUN \
curl -L -o sbt-$SBT_VERSION.deb http://dl.bintray.com/sbt/debian/sbt-$SBT_VERSION.deb && \
dpkg -i sbt-$SBT_VERSION.deb && \
rm sbt-$SBT_VERSION.deb && \
apt-get update && \
apt-get install sbt && \
sbt sbtVersion
And if I then run sbt as the root user, all works OK:
docker exec -u root myjenkins sbt sbtVersion
produces
[warn] No sbt.version set in project/build.properties, base directory: /
[info] Set current project to root (in build file:/)
[info] 1.1.6
But when I run sbt as the jenkins user, it tries to download sbt 1.1.6 again and eventually fails when it tries to modify an apt system file.
docker exec -u jenkins myjenkins sbt sbtVersion
produces:
Getting org.scala-sbt sbt 1.1.6 (this may take some time)...
downloading https://repo1.maven.org/maven2/org/scala-sbt/sbt/1.1.6/sbt-1.1.6.jar ...
[SUCCESSFUL ] org.scala-sbt#sbt;1.1.6!sbt.jar (68ms)
.
.
.
[warn] No sbt.version set in project/build.properties, base directory: /
[error] java.io.FileNotFoundException: /var/cache/apt/archives/lock (Permission denied)

I understand that all of the "RUN" commands in your Dockerfile run as the root user.
SBT downloading Scala: check where it is downloading to. By default, sbt caches dependencies under ~/.ivy2 (and/or ~/.m2). If you change the user, the home directory also changes, so sbt will look for dependencies in /home/jenkins/.ivy2, then in a local .ivy2 (double-check this), neither of which has those dependencies cached yet, so it tries to download them again.
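A quick way to confirm this from the host, using the container name from the question (a sketch; the paths assume sbt's default cache layout):
docker exec -u root myjenkins ls /root/.ivy2              # populated during the image build
docker exec -u jenkins myjenkins ls /home/jenkins/.ivy2   # likely missing or empty, which is why sbt downloads everything again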
About the /var/cache/apt/archives/lock error: something is trying to install via apt (invoked through sbt) as your jenkins user, and you need to be a privileged user to use apt. Your app user should not need to install anything (at least nothing that requires root access); rather, build an image with all required installs and then use it as a separate user. Also, if apt gives you headaches, you can just install by downloading into a folder, something like:
RUN \
curl -fsL http://downloads.typesafe.com/scala/$SCALA_VERSION/scala-$SCALA_VERSION.tgz | tar xfz - -C /usr/local && \
ln -s /usr/local/scala-$SCALA_VERSION/bin/* /usr/local/bin/
PS: You may want to always run your container as the jenkins user; in that case you can add USER jenkins after you have finished the installations, and do any additional unprivileged operations there.
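A minimal sketch of that tail of the Dockerfile, assuming the jenkins user and its home directory already exist in the image:
# privileged installs are done above this point
USER jenkins
WORKDIR /home/jenkins
# running sbt once here also pre-populates /home/jenkins/.ivy2,
# so the jenkins user does not re-download the sbt dependencies at run time
RUN sbt sbtVersion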

Related

Run vscode in docker

I want to run vscode in docker for an internal test; I've created the following Dockerfile:
FROM debian:stable
RUN apt-get update && apt-get install -y apt-transport-https curl gpg
RUN curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg \
&& install -o root -g root -m 644 microsoft.gpg /etc/apt/trusted.gpg.d/ \
&& echo "deb [arch=amd64] https://packages.microsoft.com/repos/vscode stable main" > /etc/apt/sources.list.d/vscode.list
RUN apt-get update && apt-get install -y code libx11-xcb-dev libasound2
RUN code --user-data-dir="~/.vscode-root"
I build it with
docker build -t vscode .
and I run it with
docker run vscode code -v
When I run it like this I get the following error:
You are trying to start vscode as a super user which is not recommended. If you really want to, you must specify an alternate user data directory using the --user-data-dir argument.
I just want to verify it by running something like RUN code -v. How can I do that?
Should I change the user? I just want to run vscode in docker and use some vscode APIs.
Have you tried using VSCode's built-in functionality for developing in a container?
Checkout this page which describes how to do this:
Developing inside a Container
You can try out some of the sample container configurations provided by VSCode and use any of those devcontainer.json files as an example to configure a custom development container to your liking. According to the page above:
Workspace files are mounted from the local file system or copied or cloned into the container. Extensions are installed and run inside the container, where they have full access to the tools, platform, and file system. This means that you can seamlessly switch your entire development environment just by connecting to a different container.
This is a very handy way to have different environments for development that are isolated within the container.
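For reference, a minimal .devcontainer/devcontainer.json pointing at a Dockerfile could look something like this (a sketch, not taken from the VS Code samples; remoteUser assumes a non-root user of that name exists in the image, and the Dockerfile is assumed to sit next to this file):
// .devcontainer/devcontainer.json
{
  "name": "debian-test",
  "build": { "dockerfile": "Dockerfile" },
  // run VS Code's server and terminals as a non-root user inside the container
  "remoteUser": "vscode"
}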

Docker run vs build - build gstreamer Different behaviour

I'm trying to build a docker image that uses nvidia hardware decoding in gstreamer and have encountered a strange problem with making the image.
The build process does not find the nvidia cuda related stuff while running docker build (or nvidia-docker build), but when I spin up the failed image as a container and do those very same steps from within the container everything works. I even saved the container as image which gave me a persistent image that works as intended.
Has anyone experienced a similar problem and can shed some light on it?
Dockerfile:
FROM nvcr.io/nvidia/deepstream:3.0-18.11 AS base
ENV DEBIAN_FRONTEND noninteractive
#install some dependencies. NOTE - not removing apt cache for the MWE
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
libdc1394-22 \
tmux \
vim \
libjpeg-dev \
libpng-dev \
libpng12-dev \
cuda-toolkit-10-0 \
python3-setuptools \
python3-pip ninja-build pkg-config gobject-introspection gnome-devel bison flex libgirepository1.0-dev liborc-0.4-dev
RUN pip3 install meson && ldconfig
FROM base
#pull and make gstreamer:
RUN cd /tmp && mkdir gstreamer
RUN git clone https://github.com/GStreamer/gst-build.git /tmp/gstreamer \
&& cd /tmp/gstreamer \
&& git checkout tags/1.16.0 \
&& ./setup.py -Dgtk_doc=disabled -Dgst-plugins-bad:nvdec=enabled -Dgst-plugins-bad:nvenc=enabled -Dgst-plugins-bad:iqa=disabled -Dgst-plugins-bad:bluez=disabled --reconfigure \
&& ninja -C build \
&& ninja install -C build
Testing:
build and run the container. Inside the container:
$ gst-inspect-1.0 nvdec
No such element or plugin 'nvdec'
$ cd /tmp/gstreamer
$ ./setup.py -Dgtk_doc=disabled -Dgst-plugins-bad:nvdec=enabled -Dgst-plugins-bad:nvenc=enabled -Dgst-plugins-bad:iqa=disabled -Dgst-plugins-bad:bluez=disabled --reconfigure
$ ninja -C build
$ ninja install -C build
$ gst-inspect-1.0 nvdec
Factory Details:
Rank primary (256)
[... all plugin parameters show up]
GObject
+----GInitiallyUnowned
+----GstObject
+----GstElement
+----GstVideoDecoder
+----GstNvDec
EDIT1
The image builds with no errors; it is only when I try to call gstreamer that I find it was built with no acceleration. I noticed that the major difference in the build process is
meson.build:109:2: Exception: Problem encountered: The nvdec plugin was enabled explicitly, but required CUDA dependencies were not found.
which does not happen when building from within the container.
The lack of a fatal error is most likely related to the ninja+meson build system, which looks for compatible packages, reports the exception, but doesn't throw it and continues as if nothing wrong had happened.
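One way to make this visible at docker build time (a sketch, not part of the original Dockerfile) is to capture the setup output and fail the step when meson reports that message:
RUN cd /tmp/gstreamer \
 && ./setup.py -Dgtk_doc=disabled -Dgst-plugins-bad:nvdec=enabled -Dgst-plugins-bad:nvenc=enabled -Dgst-plugins-bad:iqa=disabled -Dgst-plugins-bad:bluez=disabled --reconfigure 2>&1 | tee setup.log \
 && ! grep -q "required CUDA dependencies were not found" setup.log \
 && ninja -C build \
 && ninja install -C build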
EDIT2
Answering comment:
To build it and get the error, just build the attached docker image:
sudo docker build -t gst16:latest . > build.log
This will dump all the output into the build.log file.
I don't have a docker registry that I could use for this, and the docker image gets quite big by docker standards (~8 GB), but producing it successfully is fairly simple:
sudo docker run --runtime="nvidia" -ti gst16:latest /bin/bash
or
sudo nvidia-docker run -ti gst16:latest /bin/bash
which seems to work the same for me. Notice no --rm flag! From within the container:
#check if nvidia decoder plugin is there:
gst-inspect-1.0 nvdec
#fail!
#now build it from within:
cd /opt/gstreamer
./setup.py -Dgtk_doc=disabled -Dgst-plugins-bad:nvdec=enabled -Dgst-plugins-bad:nvenc=enabled -Dgst-plugins-bad:iqa=disabled -Dgst-plugins-bad:bluez=disabled --reconfigure
ninja -C build
ninja install -C build
gst-inspect-1.0 nvdec
#success reported
Now to get the image, exit the container (ctrl+d) and in the host shell:
sudo docker container ls -a to view all containers including stopped ones
from gst16:latest get the CONTAINER_ID and copy it
sudo docker commit <CONTAINER_ID> gst16:manual and after a few seconds you should have the container saved as an image. Verify with sudo docker images
run the new image with sudo docker run --runtime="nvidia" --rm -ti gst16:manual /bin/bash
from within the container, try gst-inspect-1.0 nvdec again to verify it's working
EDIT3
$ nvidia-docker --version
Docker version 18.09.0, build 4d60db4
I think I found the solution/reason.
Writing it here in case someone finds themselves in a similar situation, plus I hate finding old threads with a similar problem and no answer, or "nevermind, I solved it" as the only follow-up.
Docker build does not have any ties to the nvidia runtime, and gstreamer requires access to the full nvidia toolchain in order to build the plugins that need it. This is to be resolved with gstreamer 1.18, but until then there is no way to build gstreamer with the nvidia codecs during docker build.
The workaround:
Build image with all dependencies.
Run a container of said image using runtime="nvidia" but don't use --rm flag
In the container, build gstreamer and install it as normal.
Verify with gst-inspect-1.0
Commit the container as new image: docker commit <container_name> <temporary_image_name>
Tag the temporary image properly.
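Put together, the host-side commands for the workaround look roughly like this (image and tag names follow the ones used above; the final tag is just an example):
sudo docker build -t gst16:latest .
sudo docker run --runtime="nvidia" -ti gst16:latest /bin/bash   # note: no --rm
# inside the container: re-run setup.py, ninja -C build, ninja install -C build,
# verify with gst-inspect-1.0 nvdec, then exit
sudo docker container ls -a                     # find the ID of the stopped container
sudo docker commit <CONTAINER_ID> gst16:manual
sudo docker tag gst16:manual gst16:1.16-nvdec   # tag the temporary image properly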

Docker file owners and groups

I think I have a dilemma. I am trying to create a Dockerfile to reproduce a long and complicated installation process (of ROS) so that my students can get it running with less headache.
I am combining various provided scripts with manual steps that are documented. The manual steps often say to use "sudo", but I am told that using sudo inside a Dockerfile is to be avoided. So I moved those steps to before the USER command in the Dockerfile, because I am told that those commands run as root. However, as a result, the files and directories created are owned by root, and I believe subsequent steps are failing.
I have two choices, I think: move the commands to after the USER command and include sudo, or try to make the install scripts create directories and files with the right ownership. Of course, a priori I don't know what files and directories are going to be created.
Here is my Dockerfile (actually one of many I have been experimenting with.) Also if you see any other things that need to be improved or fixed please let me know!
FROM ubuntu:16.04
# create non-root user
ENV USERNAME ros
RUN adduser --ingroup sudo --disabled-password --gecos "" --shell /bin/bash --home /home/$USERNAME $USERNAME
RUN bash -c 'echo $USERNAME:ros | chpasswd'
ENV HOME /home/$USERNAME
RUN apt-get update && apt-get install --assume-yes wget sudo && \
wget https://raw.githubusercontent.com/ROBOTIS-GIT/robotis_tools/master/install_ros_kinetic.sh && \
chmod 755 ./install_ros_kinetic.sh && \
bash ./install_ros_kinetic.sh
RUN apt-get install --assume-yes ros-kinetic-joy ros-kinetic-teleop-twist-joy ros-kinetic-teleop-twist-keyboard ros-kinetic-laser-proc ros-kinetic-rgbd-launch ros-kinetic-depthimage-to-laserscan ros-kinetic-rosserial-arduino ros-kinetic-rosserial-python ros-kinetic-rosserial-server ros-kinetic-rosserial-client ros-kinetic-rosserial-msgs ros-kinetic-amcl ros-kinetic-map-server ros-kinetic-move-base ros-kinetic-urdf ros-kinetic-xacro ros-kinetic-compressed-image-transport ros-kinetic-rqt-image-view ros-kinetic-gmapping ros-kinetic-navigation ros-kinetic-interactive-markers
USER $USERNAME
WORKDIR /home/$USERNAME
RUN cd /home/$USERNAME/catkin_ws/src/ && \
git clone https://github.com/ROBOTIS-GIT/turtlebot3_msgs.git && \
git clone https://github.com/ROBOTIS-GIT/turtlebot3.git && \
git clone https://github.com/ROBOTIS-GIT/turtlebot3_simulations.git
# add catkin env
RUN echo 'source /opt/ros/kinetic/setup.bash' >> /home/$USERNAME/.bashrc
RUN echo 'source /home/ros/catkin_ws/devel/setup.bash' >> /home/$USERNAME/.bashrc
# RUN . /home/ros/.bashrc && \
# cd /home/$USERNAME/catkin_ws && \
# catkin_make
USER $USERNAME
ENTRYPOINT /bin/bash
It would be interesting, for my own information, to learn why sudo should be avoided in containers.
Historically we have used docker to automate build, test and deploy processes in our team, and we have always tried to write Dockerfiles as close as possible to the original process.
Let's say you build some app on your host and launch some commands with sudo and some without; we managed to create Dockerfiles that match exactly. The positive side effect is that you are no longer obliged to write READMEs on how to build the code - you just supply the Dockerfile, and whenever someone wants to repeat all the steps in a non-container environment, they just follow (copy/paste) the commands from the file.
So my proposal is: in the Dockerfile, install packages first, then switch to the user and proceed with all the remaining steps, using sudo when necessary. You will have all artifacts owned by the user, not root.
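A minimal sketch of that layout (the user name and repository URL are placeholders, not taken from the question):
FROM ubuntu:16.04
# privileged part: install packages and create the user as root
RUN apt-get update && apt-get install --assume-yes sudo git curl
RUN adduser --ingroup sudo --disabled-password --gecos "" builder && \
    echo 'builder ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
# unprivileged part: from here on, new files are owned by the user
USER builder
WORKDIR /home/builder
RUN git clone https://github.com/example/project.git && \
    sudo apt-get install --assume-yes curl   # sudo only where it is really needed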
UPD
I found the original discussion and this one. So it sounds like you should choose the best approach based on your particular case and needs.

How to access root folder inside a Docker container

I am new to docker, and am attempting to build an image that involves performing an npm install. Some of our dependencies come from private repos we have, and I am hitting an SSH-related issue:
I realised I was not supplying any form of SSH details to my Dockerfile, and came across various posts online about how to do this by passing args into the docker build command.
So taken from here, I have added the following to my dockerfile before the npm install command gets run:
ARG ssh_prv_key
ARG ssh_pub_key
RUN apt-get update && \
apt-get install -y \
git \
openssh-server \
libmysqlclient-dev
# Authorize SSH Host
RUN mkdir -p /root/.ssh && \
chmod 0700 /root/.ssh && \
ssh-keyscan github.com > /root/.ssh/known_hosts
# Add the keys and set permissions
RUN echo "$ssh_prv_key" > /root/.ssh/id_rsa && \
echo "$ssh_pub_key" > /root/.ssh/id_rsa.pub && \
chmod 600 /root/.ssh/id_rsa && \
chmod 600 /root/.ssh/id_rsa.pub
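For reference, the build is then invoked with matching --build-arg flags, along these lines (the key paths are an assumption):
docker build \
  --build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)" \
  --build-arg ssh_pub_key="$(cat ~/.ssh/id_rsa.pub)" \
  -t myapp .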
So, running the docker build command again with the correct args supplied, I do see further activity in the console that suggests my SSH key is being utilised.
But I am getting no hostkey alg messages, and I am still getting the same 'Host key verification failed' error. I was wondering if I could view the log file it references in the error.
Do I need to get the image running in order to be able to connect to it and browse the 'root' folder?
I hope I have made sense; please be gentle, I am a docker noob!
Thanks
The lines that start with ---> in the docker build output are valid Docker image IDs. You can pick any of these and docker run them:
docker run --rm -it 59c45dac474a sh
If a step is actually failing, one useful debugging trick is to launch the image built in the step before it and run the command by hand.
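For example, to reproduce the failure from the question by hand (the image ID is whatever the build printed for the step just before the failing npm install):
docker run --rm -it <image-id-from-previous-step> sh
# then, inside the container:
ssh -vT git@github.com   # shows whether the key and known_hosts entries are actually picked up
npm install              # re-run the failing step interactively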
Remember that anyone who has your image can do this; the way you’ve built it, if you ever push your image to any repository, your ssh private key is there for the taking, and you should probably consider it compromised. That’s doubly true since it will also be there in plain text in docker history output.
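For example (the image name is a placeholder):
docker history --no-trunc myimage
# the RUN steps are listed together with the build-arg values they consumed,
# so a private key passed via --build-arg shows up here in plain text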

Automatic building and installing packages from AUR for Arch Linux inside Docker with yaourt and makepkg-4.2.0

I'm using Docker with Arch Linux inside the Docker container.
With the introduction of makepkg-4.2.0, my installation command with yaourt broke, as described here: https://github.com/archlinuxfr/yaourt/issues/67
The problem is that yaourt should be run as a non-root user. But since yaourt also wants to install the package in every case after building it, the root user is needed, or at least a user that has permission to install packages.
So my question is how to solve this. I want to install a package from AUR inside Docker, because no official package exists yet.
Until now I was using Arch Linux, pacman and yaourt.
So the command
RUN yaourt -S --noconfirm aur/taskd
that installs taskd worked before makepkg-4.2.0.
With the new makepkg version, building the Docker image fails with the following error from yaourt:
makepkg: invalid option '--asroot'
If I change the user to a non-root user and try to install the package, I get a password prompt in my automated build asking for the root password to actually install the package:
Password: su: Authentication information cannot be recovered
Password: su: Authentication information cannot be recovered
Password: su: Authentication information cannot be recovered
The command [/bin/sh -c yaourt -S --noconfirm aur/taskd] returned a non-zero code: 1
Without polluting this with too many off-topic lines spread over two Dockerfiles, the interesting portion of the Dockerfile looks like:
FROM kaot/arch_linux_base:latest
MAINTAINER Kaot
RUN useradd --no-create-home --shell=/bin/false yaourt && usermod -L yaourt
RUN yaourt -S --noconfirm aur/taskd
ENTRYPOINT ["/controlcenter/controlcenter.sh"]
CMD ["cc:start"]
I found a solution that lets yaourt only download the information on how to build the package, then invokes makepkg directly, both as a non-root user, and afterwards installs the built package as the root user with pacman.
The relevant portion of the Dockerfile looks like this:
RUN mkdir -p /tmp/Package/ && chown yaourt /tmp/Package
USER yaourt
RUN cd /tmp/Package && pwd && ls -al && yaourt --getpkgbuild aur/taskd && cd taskd && makepkg --pkg taskd
USER root
RUN pacman -U --noconfirm /tmp/Package/taskd/taskd-1.1.0-1-x86_64.pkg.tar.xz
With some variables, further enhancements could be achieved, but in principle this works.
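For example, the hard-coded version in the pacman -U line could be replaced with a glob (a small sketch, assuming only one built package ends up in that directory):
RUN pacman -U --noconfirm /tmp/Package/taskd/taskd-*.pkg.tar.xz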
