Enable scl toolset by default - docker

I'm trying to create a docker image where the llvm-toolset-7 is automatically enabled when the image is run.
The context: this image extends rustembedded/cross:x86_64-unknown-linux-gnu and is used to cross-compile to Linux. Since I need a recent version of LLVM for my purposes, the toolset has to be enabled by default when the container starts, because the cross-compilation command is run by the cross CLI and not manually by me.
My attempt at the Dockerfile to make this happen is:
FROM rustembedded/cross:x86_64-unknown-linux-gnu
RUN yum update -y && \
yum install centos-release-scl -y && \
yum install llvm-toolset-7 -y && \
yum install scl-utils -y && \
echo "source scl_source enable llvm-toolset-7" >> ~/.bash_profile
However, when I open the interactive shell in docker desktop, it doesn't default to the bash shell with the toolset enabled.
I'm a pretty frequent Ubuntu user but this image is CentOS based and I'm having trouble understanding toolsets.

I guess you use something like docker run -it imagename bash to enter an interactive shell.
Unfortunately, the above will by default start a non-login shell, and .bash_profile is only sourced by a login shell. To make it work as a login shell, use the following to enter the container:
docker run -it imagename bash -l
-l Make bash act as if it had been invoked as a login shell
Minimal Example
Dockerfile:
FROM rustembedded/cross:x86_64-unknown-linux-gnu
RUN echo "export ABC=1" >> ~/.bash_profile
Execution:
$ docker build -t abc:1 .
$ docker run --rm -it abc:1 bash
[root@669149c2bc8b /]# env | grep ABC
[root@669149c2bc8b /]#
[root@669149c2bc8b /]# exit
$ docker run --rm -it abc:1 bash -l
[root@7be9b9b8e906 /]# env | grep ABC
ABC=1
You can see that with -l, bash runs as a login shell and .bash_profile is executed.
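For the cross use case in the original question, where commands are run non-interactively and never pass through a login shell, a different approach may help: wrap the image's command in scl enable so every process runs inside the toolset environment. This is a sketch under the assumption that overriding ENTRYPOINT doesn't conflict with anything the cross base image sets up, so test it before relying on it:

```dockerfile
FROM rustembedded/cross:x86_64-unknown-linux-gnu
RUN yum update -y && \
    yum install -y centos-release-scl && \
    yum install -y llvm-toolset-7 scl-utils
# Run every container command inside the SCL environment, so even
# non-login, non-interactive commands (like those cross issues) see llvm-toolset-7.
ENTRYPOINT ["scl", "enable", "llvm-toolset-7", "--"]
CMD ["bash"]
```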

Related

Installing Kubernetes in Docker container

I want to try Kubeflow and see if it fits my projects. I want to deploy it locally as a development server, but I have Windows on my computer and Kubeflow only works on Linux. I'm not allowed to dual boot this computer; I could install a virtual machine, but I thought it would be easier to use Docker, and oh boy was I wrong. So, the problem is: I want to install Kubernetes in a Docker container. Right now this is the Dockerfile I've written:
# Docker file with local deployment of Kubeflow
FROM ubuntu:18.04
ENV USER=Joao
ENV PASSWORD=Password
ENV WK_DIR=/home/${USER}
# Setup Ubuntu
RUN apt-get update -y
RUN apt-get install -y conntrack sudo wget
RUN useradd -rm -d /home/${USER} -s /bin/bash -g root -G sudo -u 1001 -p ${PASSWORD} ${USER}
WORKDIR ${WK_DIR}
# Installing Docker CE
RUN apt-get install -y apt-transport-https ca-certificates curl software-properties-common
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
RUN add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
RUN apt-get update -y
RUN apt-get install -y docker-ce docker-ce-cli containerd.io
# Installing Kubectl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl
# Installing Minikube
RUN curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
RUN install minikube-linux-amd64 /usr/local/bin/minikube
ENV PATH="${PATH}:${WK_DIR}"
COPY start.sh start.sh
CMD sh start.sh
With this, just to make the deployment easier, I also have a docker-compose.yaml that looks like this:
services:
kf-local:
build: .
volumes:
- path/to/folder:/usr/kubeflow
privileged: true
And start.sh looks like this:
service docker start
minikube start \
--extra-config=apiserver.service-account-issuer=api \
--extra-config=apiserver.service-account-signing-key-file=/var/lib/minikube/certs/apiserver.key \
--extra-config=apiserver.service-account-api-audiences=api \
--driver=docker
The problem is, whenever I try running this I get the error:
X Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges.
I've tried creating a user and running it from there also but then I'm not being able to run sudo, any idea how I could install Kubernetes on a Docker container?
As you already suspected, you are right: a VM would be the easy way to test this out.
Instead of setting up Kubernetes inside Docker, you can use a Linux system container for development testing.
There is a type of Linux container called LXC. Docker is an application container, while LXC is, in simple terms, more like a VM for local development testing: you can install things into it directly rather than building an application image.
Read some details about LXC: https://medium.com/@harsh.manvar111/lxc-vs-docker-lxc-101-bd49db95933a
You can also run it on Windows; try it out at: https://linuxcontainers.org/
If you have read the Kubeflow documentation, there is also another option: Multipass.
Multipass creates a Linux virtual machine on Windows, Mac or Linux
systems. The VM contains a complete Ubuntu operating system which can
then be used to deploy Kubernetes and Kubeflow.
Learn more about Multipass : https://multipass.run/#install
Insufficient user permissions on the docker group and the minikube directory cause this error ("X Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges.").
You can fix that error by adding your user to the docker group and setting permissions to the minikube profile directory (change the $USER with your username in the two commands below):
sudo usermod -aG docker $USER && newgrp docker
sudo chown -R $USER $HOME/.minikube; chmod -R u+wrx $HOME/.minikube
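To bake a suitable non-root user into the image instead, something like the sketch below could work. The user name kube is a placeholder; note that installing docker-ce may already create the docker group, which is why groupadd is given -f so it succeeds either way:

```dockerfile
# Sketch: run as a non-root user that belongs to the docker group,
# so minikube's docker driver is not invoked as root.
RUN apt-get update && apt-get install -y sudo \
 && groupadd -f docker \
 && useradd -m -s /bin/bash -G docker,sudo kube \
 && echo "kube ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
USER kube
```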

Running systemd in docker container causes host crash

I'm trying to create a systemd based docker container, but when I try running the built container my system crashes. I think running init in the container might be causing the conflict, and is somehow conflicting with systemd on my host.
When I try to run the docker container I am logged out of my account and briefly see what looks like my system going through a boot process. My host is running Arch Linux, with linux 4.20.7.
It is only when I attempt to "boot" the container by running systemd via /sbin/init, that the problem occurs.
docker run -it \
--volume=/sys/fs/cgroup:/sys/fs/cgroup:rw \
--privileged 66304e3bc48
Dockerfile (adapted from solita/ubuntu-systemd):
FROM ubuntu:18.04
# Don't start any optional services.
RUN find /etc/systemd/system \
/lib/systemd/system \
-path '*.wants/*' \
-not -name '*journald*' \
-not -name '*systemd-tmpfiles*' \
-not -name '*systemd-user-sessions*' \
-exec rm \{} \;
RUN apt-get update && \
apt-get install --yes \
python sudo bash ca-certificates dbus systemd && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
RUN systemctl set-default multi-user.target
RUN systemctl mask dev-hugepages.mount sys-fs-fuse-connections.mount
STOPSIGNAL SIGRTMIN+3
# Workaround for docker/docker#27202, technique based on comments from docker/docker#9212
CMD ["/bin/bash", "-c", "exec /sbin/init --log-target=journal 3>&1"]
I would expect the container to just boot up running systemd, and I'm not sure what I might be doing wrong.
Docker does not want to include systemd by default, because it positions itself as an application container (which means one application per container). There is another type of container, called a system container; the best known are OpenVZ, LXC/LXD and systemd-nspawn. All of those run a full OS with systemd, as if it were a virtual machine.
Using systemd inside Docker is also dangerous compared to running systemd inside LXD.
There is even a newcomer called Podman, which is a clone of Docker but supports systemd by default when you install it, or when you use an image which already contains systemd, like the Ubuntu cloud images: http://uec-images.ubuntu.com/releases/server/bionic/release/
So my advice is to test LXD and systemd-nspawn, and keep an eye on Podman; it solves what Docker does not want to solve. Read this to understand: https://lwn.net/Articles/676831/
references:
https://coreos.com/rkt/docs/latest/rkt-vs-other-projects.html
https://podman.io/slides/2018_10_01_Replacing_Docker_With_Podman.pdf
https://containerjournal.com/features/system-containers-vs-application-containers-difference-matter
Runtimes And the Curse of the Privileged Container
https://brauner.github.io/2019/02/12/privileged-containers.html
I ended up using the paulfantom/ubuntu-molecule Docker image.
Currently it looks like they're just installing systemd, setting some environment variables, and using the systemd binary directly as the entry point. It seems to work without the issues I mentioned in the original post.
Dockerfile
FROM ubuntu:18.04
ENV container docker
ENV LC_ALL C
ENV DEBIAN_FRONTEND noninteractive
RUN sed -i 's/# deb/deb/g' /etc/apt/sources.list
# hadolint ignore=DL3008
RUN apt-get update \
&& apt-get install -y --no-install-recommends systemd python sudo bash iproute2 net-tools \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# hadolint ignore=SC2010,SC2086
RUN cd /lib/systemd/system/sysinit.target.wants/ \
&& ls | grep -v systemd-tmpfiles-setup | xargs rm -f $1
RUN rm -f /lib/systemd/system/multi-user.target.wants/* \
/etc/systemd/system/*.wants/* \
/lib/systemd/system/local-fs.target.wants/* \
/lib/systemd/system/sockets.target.wants/*udev* \
/lib/systemd/system/sockets.target.wants/*initctl* \
/lib/systemd/system/basic.target.wants/* \
/lib/systemd/system/anaconda.target.wants/* \
/lib/systemd/system/plymouth* \
/lib/systemd/system/systemd-update-utmp*
RUN systemctl set-default multi-user.target
ENV init /lib/systemd/systemd
VOLUME [ "/sys/fs/cgroup" ]
ENTRYPOINT ["/lib/systemd/systemd"]
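For completeness, a typical invocation for a systemd image like this looks roughly as follows. This is a sketch; the exact tmpfs and cgroup mount options depend on your host, cgroup version, and Docker release:

```shell
docker run -d --name sysd \
  --tmpfs /run --tmpfs /run/lock \
  -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
  systemd-image
```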
To "match the host as close as possible" was the original goal of the docker-systemctl-replacement script. You can test-drive scripts in a container that may later run on a virtual machine. It allows you to run some systemctl commands without an active systemd daemon.
It can also serve as an init daemon if you wish. A systemd-enabled operating system will feel quite similar inside a container with it.
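A minimal sketch of that approach, assuming you have a local copy of the project's systemctl replacement script (the file name below is an assumption; check the project's README for the current one):

```dockerfile
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y python3
# Shadow the real systemctl with the replacement script so services
# can be started/enabled without a running systemd daemon.
COPY systemctl3.py /usr/bin/systemctl
CMD ["/usr/bin/systemctl"]
```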

Run firefox in docker container on raspberry pi

I've followed this thread
to build a docker container to run on a Raspberry Pi. I've managed to run it on a normal CentOS machine, but on the Pi I always get this error (I always run it in the MobaXterm application on Windows 10 because it has X11 support):
Unable to init server: broadway display type not supported 'machinename:0'
Error: cannot open display: machinename:0
I've tried to build it with 2 different dockerfile, the 1st:
FROM resin/rpi-raspbian:latest
# Make sure the package repository is up to date
RUN apt-get update && apt-get install -y firefox-esr
USER root
ENV HOME /root
CMD /usr/bin/firefox
And tried to run it with this script:
#!/usr/bin/env bash
CONTAINER=firefox_wo_vnc
COMMAND=/bin/bash
DISPLAY="machinename:0"
USER=$(whoami)
docker run -ti --rm \
-e DISPLAY \
-v "/c/Users/$USER:/home/$USER:rw" \
$CONTAINER \
$COMMAND
and I got the error :(
The 2nd that I've tried:
# Firefox over VNC
#
# VERSION 0.1
# DOCKER-VERSION 0.2
FROM resin/rpi-raspbian:latest
RUN apt-get update
# Install vnc, xvfb in order to create a 'fake' display and firefox
RUN apt-get install -y x11vnc xvfb firefox-esr
RUN mkdir ~/.vnc
# Setup a password
RUN x11vnc -storepasswd 1234 ~/.vnc/passwd
# Autostart firefox (might not be the best way to do it, but it does the trick)
RUN bash -c 'echo "firefox" >> /.bashrc'
And tried to run:
#!/usr/bin/env bash
CONTAINER=firefoxvnc
COMMAND=/bin/bash
DISPLAY="machinename:0"
USER=$(whoami)
docker run \
-it \
--rm \
--user=$USER \
--workdir="/home/$USER" \
-v "/c/Users/$USER:/home/$USER:rw" \
-e DISPLAY \
$CONTAINER \
$COMMAND
This way I can log in via VNC, but Firefox is not running and I can't actually see anything, just an empty desktop.
I've read many threads, and I've installed xorg, openbox, X11 apps ...
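For the first (non-VNC) approach on a Linux host, the usual pattern is to pass both DISPLAY and the X socket into the container; with MobaXterm on Windows there is no host-side socket, so only the networked DISPLAY plus X access control applies. A sketch of the Linux-host variant (the image name is the one from the question):

```shell
# Allow local containers to talk to the X server (coarse-grained; for testing only).
xhost +local:
docker run --rm \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  firefox_wo_vnc /usr/bin/firefox
```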

Cannot start a docker container

This is my Dockerfile, which attempts to setup Nginx with Phusion Passenger, and then install Ruby.
# Build command: docker build --force-rm --tag="rails" .
# Run command: docker run -P -d rails
FROM ubuntu:14.04
ENV DEBIAN_FRONTEND noninteractive
# Install nginx and passenger
RUN sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 561F9B9CAC40B2F7
RUN sudo sudo apt-get install -yf apt-transport-https ca-certificates
RUN sudo echo 'deb https://oss-binaries.phusionpassenger.com/apt/passenger trusty main' > /etc/apt/sources.list.d/passenger.list
RUN sudo chown root: /etc/apt/sources.list.d/passenger.list
RUN sudo chmod 600 /etc/apt/sources.list.d/passenger.list
RUN sudo apt-get update
RUN sudo apt-get install -yf nginx-extras passenger
RUN sudo service nginx restart
# Install rbenv and ruby
RUN sudo apt-get install -yf git autoconf bison build-essential libssl-dev libyaml-dev libreadline6-dev zlib1g-dev libncurses5-dev libffi-dev libgdbm3 libgdbm-dev curl
RUN git clone https://github.com/sstephenson/rbenv.git ~/.rbenv
RUN git clone https://github.com/sstephenson/ruby-build.git ~/.rbenv/plugins/ruby-build
RUN echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
RUN echo 'eval "$(rbenv init -)"' >> ~/.bashrc
RUN /root/.rbenv/plugins/ruby-build/install.sh
ENV PATH /root/.rbenv/bin:$PATH
RUN rbenv install 2.1.5
RUN rbenv global 2.1.5
EXPOSE 80
Basically, I can build the image just fine, but I cannot start a container from this image.
This is my first time using docker, so I suspect that I need to use the CMD directive, but I have no idea what command should go there.
I appreciate any help to point out what is wrong with my Dockerfile.
The container is running successfully; it's just exiting immediately because you don't specify any process to run. A container will only run as long as its main process. As you've run it in the background (the -d flag), it won't provide any output, which is a bit confusing.
For example:
$ docker run ubuntu echo "Hello World"
Hello World
The container ran the command and exited as expected.
$ docker run -d ubuntu echo "Hello World"
efd8f9980c1c9489f72a576575cf57ec3c2961e312b981ad13a2118914732036
The same thing has happened, but as we ran with -d, we got the id of the container back rather than the output. We can get the output using the logs command:
$ docker logs efd8f9980c1c9489f72a576575cf57ec3c2961e312b981ad13a2118914732036
Hello World
What you need to do is start your Rails app, or whatever process you want the container to run, when you launch the container. You can either do this from the docker run command or using a CMD statement in the Dockerfile. Note that the main process must stay in the foreground; if it forks to the background, the container will exit.
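For the nginx/passenger image in the question, that means the Dockerfile's CMD must keep nginx in the foreground rather than relying on service nginx restart at build time. A minimal sketch:

```dockerfile
# Keep nginx as the foreground (main) process so the container stays alive.
CMD ["nginx", "-g", "daemon off;"]
```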
If you want to get a shell in a container, start it with -it e.g:
$ docker run -it ubuntu /bin/bash
To be honest, I think you'd be better served by using an official image e.g. https://registry.hub.docker.com/_/rails/.
You need to use ENTRYPOINT to start what you want, check the official documentation.
See what the logs says by issuing
docker logs rails
or, as
docker ps -an 1
should show the last id, with the command
docker logs this_id

How to use sudo inside a docker container?

Normally, docker containers are run using the user root. I'd like to use a different user, which is no problem using docker's USER directive. But this user should be able to use sudo inside the container. This command is missing.
Here's a simple Dockerfile for this purpose:
FROM ubuntu:12.04
RUN useradd docker && echo "docker:docker" | chpasswd
RUN mkdir -p /home/docker && chown -R docker:docker /home/docker
USER docker
CMD /bin/bash
Running this container, I get logged in with user 'docker'. When I try to use sudo, the command isn't found. So I tried to install the sudo package inside my Dockerfile using
RUN apt-get install sudo
This results in Unable to locate package sudo
Just got it. As regan pointed out, I had to add the user to the sudoers group. But the main reason was I'd forgotten to update the repositories cache, so apt-get couldn't find the sudo package. It's working now. Here's the completed code:
FROM ubuntu:12.04
RUN apt-get update && \
apt-get -y install sudo
RUN useradd -m docker && echo "docker:docker" | chpasswd && adduser docker sudo
USER docker
CMD /bin/bash
When neither sudo nor apt-get is available in the container, you can also jump into a running container as the root user using:
docker exec -u root -t -i container_id /bin/bash
The other answers didn't work for me. I kept searching and found a blog post that covered how a team was running non-root inside of a docker container.
Here's the TL;DR version:
RUN apt-get update \
&& apt-get install -y sudo
RUN adduser --disabled-password --gecos '' docker
RUN adduser docker sudo
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER docker
# this is where I was running into problems with the other approaches
RUN sudo apt-get update
I was using FROM node:9.3 for this, but I suspect that other similar container bases would work as well.
For anyone who has this issue with an already running container, and they don't necessarily want to rebuild, the following command connects to a running container with root privileges:
docker exec -ti -u root container_name bash
You can also connect using its ID, rather than its name, by finding it with:
docker ps -l
To save your changes so that they are still there when you next launch the container (or docker-compose cluster) - note that these changes would not be repeated if you rebuild from scratch:
docker commit container_id image_name
To roll back to a previous image version (warning: this deletes history rather than appends to the end, so to keep a reference to the current image, tag it first using the optional step):
docker history image_name
docker tag latest_image_id my_descriptive_tag_name # optional
docker tag desired_history_image_id image_name
To start a container that isn't running and connect as root:
docker run -ti -u root --entrypoint=/bin/bash image_id_or_name -s
To copy from a running container:
docker cp <containerId>:/file/path/within/container /host/path/target
To export a copy of the image:
docker save image_name | gzip > /dir/file.tar.gz
Which you can restore to another Docker install using:
gzcat /dir/file.tar.gz | docker load
It is much quicker, but takes more space, to skip compression:
docker save image_name > dir/file.tar
And:
cat dir/file.tar | docker load
If you want to connect to the container and install something using apt-get:
First, as in the answer above from "Tomáš Záluský":
docker exec -u root -t -i container_id /bin/bash
then try
apt-get update or apt-get 'anything you want'
It worked for me. Hope it's useful to all.
Unlike the accepted answer, I use usermod instead.
Assuming you're already logged in as root in the container, and "fruit" is the new non-root username you want to add, simply run these commands:
apt update && apt install sudo
adduser fruit
usermod -aG sudo fruit
Remember to save the image after the update. Use docker ps to get the running container's <CONTAINER ID> and <IMAGE>, then run docker commit -m "added sudo user" <CONTAINER ID> <IMAGE> to save the image.
Then test with:
su fruit
sudo whoami
Or test by logging in directly as that non-root user when launching the container (make sure to save the image first):
docker run -it --user fruit <IMAGE>
sudo whoami
You can use sudo -k to reset password prompt timestamp:
sudo whoami # No password prompt
sudo -k # Invalidates the user's cached credentials
sudo whoami # This will prompt for password
Here's how I setup a non-root user with the base image of ubuntu:18.04:
RUN \
groupadd -g 999 foo && useradd -u 999 -g foo -G sudo -m -s /bin/bash foo && \
sed -i /etc/sudoers -re 's/^%sudo.*/%sudo ALL=(ALL:ALL) NOPASSWD: ALL/g' && \
sed -i /etc/sudoers -re 's/^root.*/root ALL=(ALL:ALL) NOPASSWD: ALL/g' && \
sed -i /etc/sudoers -re 's/^#includedir.*/## Removed the include directive ##/g' && \
echo "foo ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers && \
echo "Customized the sudoers file for passwordless access to the foo user!" && \
echo "foo user:"; su - foo -c id
What happens with the above code:
The user and group foo is created.
The user foo is added to both the foo and sudo groups.
The uid and gid are set to the value of 999.
The home directory is set to /home/foo.
The shell is set to /bin/bash.
The sed command does inline updates to the /etc/sudoers file to allow foo and root users passwordless access to the sudo group.
The sed command disables the #includedir directive that would allow any files in subdirectories to override these inline updates.
If sudo or apt-get is not accessible inside the container, you can use the option below on a running container:
docker exec -u root -it f83b5c5bf413 ash
where "f83b5c5bf413" is my container ID.
This may not work for all images, but some images contain a root user already, such as in the jupyterhub/singleuser image. With that image it's simply:
USER root
RUN sudo apt-get update
The main idea is that you need to create a user that can act as root inside the container.
Main commands:
RUN echo "bot:bot" | chpasswd
RUN adduser bot sudo
The first pipes the literal string bot:bot into chpasswd, which sets the password of the user bot to bot. From the man page, chpasswd does:
The chpasswd command reads a list of user name and password pairs from standard input and uses this information to update a group of existing users. Each line is of the format:
user_name:password
By default the supplied password must be in clear-text, and is encrypted by chpasswd. Also the password age will be updated, if present.
The second command, I assume, adds the user bot to the sudo group.
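The user_name:password format can be illustrated with plain shell parameter expansion (just a sketch; chpasswd itself does this splitting internally):

```shell
# Split one chpasswd-style input line on the first ':'.
line="bot:bot"
user=${line%%:*}    # everything before the first ':'
pass=${line#*:}     # everything after the first ':'
echo "user=$user password=$pass"
```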
Full docker container to play with:
FROM continuumio/miniconda3
# FROM --platform=linux/amd64 continuumio/miniconda3
MAINTAINER Brando Miranda "me@gmail.com"
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
ssh \
git \
m4 \
libgmp-dev \
opam \
wget \
ca-certificates \
rsync \
strace \
gcc \
rlwrap \
sudo
# https://github.com/giampaolo/psutil/pull/2103
RUN useradd -m bot
# format for chpasswd user_name:password
RUN echo "bot:bot" | chpasswd
RUN adduser bot sudo
WORKDIR /home/bot
USER bot
#CMD /bin/bash
If you have a container running as root that runs a script (which you can't change) that needs access to the sudo command, you can simply create a new sudo script in your $PATH that calls the passed command.
e.g. In your Dockerfile:
RUN if type sudo 2>/dev/null; then \
echo "The sudo command already exists... Skipping."; \
else \
echo -e "#!/bin/sh\n\${@}" > /usr/sbin/sudo; \
chmod +x /usr/sbin/sudo; \
fi
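The same idea can be tried outside a Dockerfile. Here is a runnable sketch of a pass-through "sudo" shim: it simply executes whatever command it is given, which is enough when the shell is already root. It is written to ./sudo here for demonstration; in the Dockerfile above it goes to /usr/sbin/sudo:

```shell
# Create a minimal shim that runs its arguments as a command.
printf '#!/bin/sh\nexec "$@"\n' > ./sudo
chmod +x ./sudo
./sudo echo ok    # behaves like plain: echo ok — prints "ok"
```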
An example Dockerfile for CentOS 7. In this example we add prod_user with sudo privileges.
FROM centos:7
RUN yum -y update && yum clean all
RUN yum -y install openssh-server python3 sudo
RUN adduser -m prod_user && \
echo "MyPass*49?" | passwd prod_user --stdin && \
usermod -aG wheel prod_user && \
mkdir /home/prod_user/.ssh && \
chown prod_user:prod_user -R /home/prod_user/ && \
chmod 700 /home/prod_user/.ssh
RUN echo "prod_user ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers && \
echo "%wheel ALL=(ALL) ALL" >> /etc/sudoers
RUN echo "PasswordAuthentication yes" >> /etc/ssh/sshd_config
RUN systemctl enable sshd.service
VOLUME [ "/sys/fs/cgroup" ]
ENTRYPOINT ["/usr/sbin/init"]
There is no answer yet on how to do this on CentOS, so:
On CentOS, you can add the following to your Dockerfile:
RUN echo "user ALL=(root) NOPASSWD:ALL" > /etc/sudoers.d/user && \
chmod 0440 /etc/sudoers.d/user
I'm using an Ubuntu image and faced this issue while using Docker Desktop.
The following resolved the issue:
apt-get update
apt-get install sudo
