Add sudo permission (without password) to a user by command line - docker

I'm creating a Dockerfile from the ubuntu:bionic image.
I want an ubuntu user with sudo privileges.
This is my Dockerfile:
FROM ubuntu:bionic
ENV DEBIAN_FRONTEND noninteractive
# Get the basic stuff
RUN apt-get update && \
apt-get -y upgrade && \
apt-get install -y \
sudo
# Create ubuntu user with sudo privileges
RUN useradd -ms /bin/bash ubuntu && \
usermod -aG sudo ubuntu
# Set as default user
USER ubuntu
WORKDIR /home/ubuntu
ENV DEBIAN_FRONTEND teletype
CMD ["/bin/bash"]
But with this approach I need to type the password of the ubuntu user.
Is there a way to add a NOPASSWD clause for the sudo group in the sudoers file from the command line?

First, using sudo inside Docker is generally discouraged; you can usually design the same behavior with USER + gosu.
But if you still need it, just add the next line after you set up the normal user:
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
So for your scenario, a working Dockerfile is:
FROM ubuntu:bionic
ENV DEBIAN_FRONTEND noninteractive
# Get the basic stuff
RUN apt-get update && \
apt-get -y upgrade && \
apt-get install -y \
sudo
# Create ubuntu user with sudo privileges
RUN useradd -ms /bin/bash ubuntu && \
usermod -aG sudo ubuntu
# Newly added to disable the sudo password prompt
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
# Set as default user
USER ubuntu
WORKDIR /home/ubuntu
ENV DEBIAN_FRONTEND teletype
CMD ["/bin/bash"]
Test the effect:
$ docker build -t abc:1 .
Sending build context to Docker daemon 2.048kB
Step 1/9 : FROM ubuntu:bionic
......
Successfully built b3aa0793765f
Successfully tagged abc:1
$ docker run --rm abc:1 cat /etc/sudoers
cat: /etc/sudoers: Permission denied
$ docker run --rm abc:1 sudo cat /etc/sudoers
#
# This file MUST be edited with the 'visudo' command as root.
#
# Please consider adding local content in /etc/sudoers.d/ instead of
# directly modifying this file.
#
# See the man page for details on how to write a sudoers file.
#
Defaults env_reset
......
#includedir /etc/sudoers.d
%sudo ALL=(ALL) NOPASSWD:ALL
You can see that with sudo we can now execute a command that requires root, without being prompted for a password.
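For completeness, here is a minimal sketch of the USER + gosu approach mentioned above (the image tag and file names are illustrative, and it assumes the gosu package is available in the image's apt sources):
FROM ubuntu:bionic
RUN apt-get update && apt-get install -y gosu && rm -rf /var/lib/apt/lists/*
RUN useradd -ms /bin/bash ubuntu
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["/bin/bash"]
And entrypoint.sh, which does any root-only setup and then drops privileges:
#!/bin/bash
set -e
# root-only setup goes here, e.g. fixing ownership of mounted volumes
chown -R ubuntu:ubuntu /home/ubuntu
# drop privileges and run the requested command as the ubuntu user
exec gosu ubuntu "$@"
The container starts as root, so no sudo is needed for setup, and the application itself runs unprivileged.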

Related

Permissions in Docker volume

I am struggling with permissions on a docker volume: I get access denied when writing.
This is a small part of my Dockerfile:
FROM ubuntu:18.04
RUN apt-get update && \
apt-get install -y \
apt-transport-https \
build-essential \
ca-certificates \
curl \
vim && \............
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash - && apt-get install -y nodejs
# Add non-root user
ARG USER=user01
RUN useradd -Um -d /home/$USER -s /bin/bash $USER && \
apt install -y python3-pip && \
pip3 install qrcode[pil]
#Copy that startup.sh into the scripts folder
COPY /scripts/startup.sh /scripts/startup.sh
#Making the startup.sh executable
RUN chmod -v +x /scripts/startup.sh
#Copy node API files
COPY --chown=user1 /node_api/* /home/user1/
USER $USER
WORKDIR /home/$USER
# Expose needed ports
EXPOSE 3000
VOLUME /data_storage
ENTRYPOINT [ "/scripts/startup.sh" ]
Also a small part of my startup.sh
#!/bin/bash
/usr/share/lib/provision.py --enterprise-seed $ENTERPRISE_SEED > config.json
Then my docker build command:
sudo docker build -t mycontainer .
And the docker run command:
sudo docker run -v data_storage:/home/user01/.client -p 3008:3000 -itd mycontainer
The problem I have is that the Python script creates the folder /home/user01/.client and copies some files in there. That always worked fine. But now I want those files, which are data files, in a volume for backup purposes. And as soon as I map my volume there, I get permission denied, so the Python script is not able to write anymore.
So at the end of my Dockerfile this instruction, combined with the mapping in the docker run command, gives me the permission denied:
VOLUME /data_storage
Any suggestions on how to resolve this? Are some more permissions needed for "user01"?
Thanks
I was able to resolve my issue by removing the VOLUME instruction from the Dockerfile and doing the mapping only when executing docker run:
sudo docker run -v data_storage:/home/user01/.client -p 3008:3000 -itd mycontainer
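If you prefer to keep a VOLUME instruction (or hit the same error again with the named volume), a common pattern is to create the mount point and hand ownership to the non-root user before the USER line, so the named volume inherits that ownership the first time it is mounted. A sketch, using the same $USER build argument as above:
# As root, before the USER $USER line:
RUN mkdir -p /home/$USER/.client && \
    chown -R $USER:$USER /home/$USER/.client
When an empty named volume is mounted for the first time, Docker copies the permissions and contents of the existing directory in the image into it, so user01 can keep writing to /home/user01/.client even with -v data_storage:/home/user01/.client.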

How to troubleshoot error coming from a container

I'm experimenting for the first time with creating a docker container to run ROS. I am getting a confusing error and I can't figure out how to troubleshoot it:
bash-3.2$ docker run -ti --name turtlebot3 rosdocker
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
bash: /home/ros/catkin_ws/devel/setup.bash: No such file or directory
I am creating rosdocker with this Dockerfile, from inside VS Code. I am using the Docker plugin and its "Build Image" command. Here's the Dockerfile:
FROM ros:kinetic-robot-xenial
RUN apt-get update && apt-get install --assume-yes \
sudo \
python-pip \
ros-kinetic-desktop-full \
ros-kinetic-turtlebot3 \
ros-kinetic-turtlebot3-bringup \
ros-kinetic-turtlebot3-description \
ros-kinetic-turtlebot3-fake \
ros-kinetic-turtlebot3-gazebo \
ros-kinetic-turtlebot3-msgs \
ros-kinetic-turtlebot3-navigation \
ros-kinetic-turtlebot3-simulations \
ros-kinetic-turtlebot3-slam \
ros-kinetic-turtlebot3-teleop
# install python packages
RUN pip install -U scikit-learn numpy scipy
RUN pip install --upgrade pip
# create non-root user
ENV USERNAME ros
RUN adduser --ingroup sudo --disabled-password --gecos "" --shell /bin/bash --home /home/$USERNAME $USERNAME
RUN bash -c 'echo $USERNAME:ros | chpasswd'
ENV HOME /home/$USERNAME
USER $USERNAME
# create catkin_ws
RUN mkdir /home/$USERNAME/catkin_ws
WORKDIR /home/$USERNAME/catkin_ws
# add catkin env
RUN echo 'source /opt/ros/kinetic/setup.bash' >> /home/$USERNAME/.bashrc
RUN echo 'source /home/$USERNAME/catkin_ws/devel/setup.bash' >> /home/$USERNAME/.bashrc
I am not sure where the error is coming from and I don't know how to debug or troubleshoot it. I would appreciate any pointers!
You are creating a user ros and then in the last line doing this:
RUN echo 'source /home/$USERNAME/catkin_ws/devel/setup.bash' >> /home/$USERNAME/.bashrc
So the shell will look for /home/ros/catkin_ws/devel/setup.bash, which is never created anywhere inside the Dockerfile.
Either create this file, or if you are planning to mount it from the host into the container, run with:
docker run -ti --name turtlebot3 -v sourcevolume:destinationvolume rosdocker
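Alternatively, devel/setup.bash is normally generated by building the workspace, so another option (a sketch, assuming you want the workspace baked into the image) is to initialize it in the Dockerfile right after the mkdir of catkin_ws, while still running as the ros user:
RUN /bin/bash -c "source /opt/ros/kinetic/setup.bash && \
    mkdir -p /home/$USERNAME/catkin_ws/src && \
    cd /home/$USERNAME/catkin_ws && \
    catkin_make"
After that, the source line added to .bashrc finds the file and the "No such file or directory" error goes away.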

Errors when running Chromium browser inside docker container

I have created a docker image to run the Chromium browser. It works well and I've been able to track down a solution to nearly every issue that has popped up. However, there are a couple of errors that display in the terminal that I can't seem to find a solution for.
The first error is:
[1:1:0329/015547.694703:ERROR:gpu_process_transport_factory.cc(1019)] Lost UI shared context.
The other is:
[1:216:0329/015547.823867:ERROR:bus.cc(394)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix")
I haven't experienced any problems (yet) as far as functionality is concerned, but I can't handle the not-knowing.
Host: CentOS 7
Dockerfile:
FROM ubuntu:16.04
COPY entrypoint.sh /sbin/entrypoint.sh
RUN chmod 755 /sbin/entrypoint.sh
ENTRYPOINT ["/sbin/entrypoint.sh"]
RUN apt-get update -y
RUN apt-get install packagekit-gtk3-module -y
RUN apt-get install libcanberra-gtk* -y
RUN apt-get install chromium-browser -y
RUN apt-get install xauth -y
RUN apt-get upgrade -y
RUN apt-get autoremove &&\
apt-get clean &&\
rm -rf /tmp/*
Entrypoint script:
#!/bin/bash
# Uses an environment variable passed in at runtime by run_chromium.sh to add a
# username that matches the host; if the account already exists, the script exits
# and reminds the user to comment out a section of run_gscan.sh
useradd -m ${NEW_USER}
if [[ "${?}" -ne 0 ]]
then
echo "Account already created; starting gscan2pdf container"
echo "If you have not already done so: "
echo "Please comment out the indicated section in the
'gscan2pdf_run.sh' script"
exit 0
fi
# If the host user's username was not already present, the following code
# becomes reachable and adds the new user as a sudoer, as well as matching
# the UID and GID in the image to those of the user's account on the host
# machine; this is necessary for the method of accessing the host's XServer
# to work properly
echo "${NEW_USER}:${NEW_USER}" | chpasswd && \
usermod --shell /bin/bash ${NEW_USER} && \
usermod -aG sudo ${NEW_USER} && \
mkdir /etc/sudoers.d && \
touch /etc/sudoers.d/${NEW_USER} && \
echo "${NEW_USER} ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers.d/${NEW_USER}
&& \
chmod 0440 /etc/sudoers.d/${NEW_USER} && \
usermod --uid "${NEW_UID}" ${NEW_USER} && \
groupmod --gid "${NEW_GID}" ${NEW_USER}
# If the above code was reachable - because a matching user account was not
# present at runtime - the user is instructed to comment out a section of the
# run_gscan.sh file before the next run
echo "Account has been created to sync access to the host's XServer."
echo "Please comment out the indicated section in the 'run_gscan.sh' script"
Finally, the script that's used to run a container:
#!/bin/bash
########################################################
# The following variables will be passed to the container at runtime:
# the first two variables are used by the entrypoint.sh to create a
# matching user account in the image
######################################################
HOST_UID=$(id -u)
HOST_GID=$(id -g)
#########################################################
# The next two are used to expose the unix socket in the tmp directory and an
# as-of-yet uncreated xauth file in the container; since the tmp directory is
# not static, this is a more secure approach
###########################################################
XSOCK=/tmp/.X11-unix &&
XAUTH=/tmp/.docker.xauth &&
############################################################
# This creates the xauth file in the tmp directory mentioned above; then a
# series of piped commands passes a numeric-format authorization entry for the
# specified display - :0 here - to the sed stream editor, then to the new Xauth
# file created by touch, which uses nmerge to merge the numeric-format
# authorization entry into the newly created file that the running container
# will use to access the Xserver and display the GUI
##########################################################
touch $XAUTH &&
xauth nlist :0 | sed -e 's/^..../ffff/' | xauth -f $XAUTH nmerge - &&
# Comment out this section after first run
##########################################################
#docker run -e NEW_USER="${USER}" -e NEW_UID="${HOST_UID}" -e NEW_GID="${HOST_GID}" hildy:chromium
#LAST_CONTAINER=$(docker ps -lq) &&
#docker commit "${LAST_CONTAINER}" hildy:chromium
##########################################################
##########################################################
# This is the command that will be run after the user account is created
# above; note that the entrypoint script - and ipso facto the default CMD in
# the image - are overridden at runtime and the application is launched instead
########################################################
docker run \
-ti \
--user $USER \
--privileged \
-v /dev/snd:/dev/snd \
-v /var/run/dbus:/var/run/dbus \
-v $XAUTH:$XAUTH -v $XSOCK:$XSOCK \
-e XAUTHORITY=$XAUTH -e DISPLAY \
--entrypoint "" hildy:chromium chromium-browser --disable-gpu
You could try starting Chrome using its SwiftShader software renderer instead of the --disable-gpu option:
chromium-browser --use-gl=swiftshader
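In the run script above, that amounts to changing only the last line of the docker run command, for example:
--entrypoint "" hildy:chromium chromium-browser --use-gl=swiftshader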

How to use sudo inside a docker container?

Normally, docker containers are run using the user root. I'd like to use a different user, which is no problem using docker's USER directive. But this user should be able to use sudo inside the container. This command is missing.
Here's a simple Dockerfile for this purpose:
FROM ubuntu:12.04
RUN useradd docker && echo "docker:docker" | chpasswd
RUN mkdir -p /home/docker && chown -R docker:docker /home/docker
USER docker
CMD /bin/bash
Running this container, I get logged in with user 'docker'. When I try to use sudo, the command isn't found. So I tried to install the sudo package inside my Dockerfile using
RUN apt-get install sudo
This results in Unable to locate package sudo
Just got it. As regan pointed out, I had to add the user to the sudoers group. But the main reason was I'd forgotten to update the repositories cache, so apt-get couldn't find the sudo package. It's working now. Here's the completed code:
FROM ubuntu:12.04
RUN apt-get update && \
apt-get -y install sudo
RUN useradd -m docker && echo "docker:docker" | chpasswd && adduser docker sudo
USER docker
CMD /bin/bash
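To verify, a quick test (the tag sudo-test is just an example):
$ docker build -t sudo-test .
$ docker run -it sudo-test
docker@<container-id>:/$ sudo whoami
[sudo] password for docker:
root
The password is the one set with chpasswd, i.e. docker.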
When neither sudo nor apt-get is available in the container, you can also jump into a running container as the root user using the command:
docker exec -u root -t -i container_id /bin/bash
The other answers didn't work for me. I kept searching and found a blog post that covered how a team was running non-root inside of a docker container.
Here's the TL;DR version:
RUN apt-get update \
&& apt-get install -y sudo
RUN adduser --disabled-password --gecos '' docker
RUN adduser docker sudo
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER docker
# this is where I was running into problems with the other approaches
RUN sudo apt-get update
I was using FROM node:9.3 for this, but I suspect that other similar container bases would work as well.
For anyone who has this issue with an already-running container and doesn't necessarily want to rebuild, the following command connects to a running container with root privileges:
docker exec -ti -u root container_name bash
You can also connect using its ID, rather than its name, by finding it with:
docker ps -l
To save your changes so that they are still there when you next launch the container (or docker-compose cluster) - note that these changes would not be repeated if you rebuild from scratch:
docker commit container_id image_name
To roll back to a previous image version (warning: this deletes history rather than appends to the end, so to keep a reference to the current image, tag it first using the optional step):
docker history image_name
docker tag latest_image_id my_descriptive_tag_name # optional
docker tag desired_history_image_id image_name
To start a container that isn't running and connect as root:
docker run -ti -u root --entrypoint=/bin/bash image_id_or_name -s
To copy from a running container:
docker cp <containerId>:/file/path/within/container /host/path/target
To export a copy of the image:
docker save image_name | gzip > /dir/file.tar.gz
Which you can restore to another Docker install using:
gzcat /dir/file.tar.gz | docker load
It is much quicker, but takes more space, not to compress:
docker save image_name > dir/file.tar
And:
cat dir/file.tar | docker load
If you want to connect to a container and install something using apt-get, first, as in the answer above from "Tomáš Záluský":
docker exec -u root -t -i container_id /bin/bash
then try:
apt-get update
apt-get install 'anything you want'
It worked for me; hope it's useful for all.
Unlike the accepted answer, I use usermod instead.
Assuming you are already logged in as root in the container, and "fruit" is the new non-root username to add, simply run these commands:
apt update && apt install sudo
adduser fruit
usermod -aG sudo fruit
Remember to save the image after the update. Use docker ps to get the running container's <CONTAINER ID> and <IMAGE>, then run docker commit -m "added sudo user" <CONTAINER ID> <IMAGE> to save the image.
Then test with:
su fruit
sudo whoami
Or test by logging in directly (make sure to save the image first) as that non-root user when launching the container:
docker run -it --user fruit <IMAGE>
sudo whoami
You can use sudo -k to reset password prompt timestamp:
sudo whoami # No password prompt
sudo -k # Invalidates the user's cached credentials
sudo whoami # This will prompt for password
Here's how I set up a non-root user with the ubuntu:18.04 base image:
RUN \
groupadd -g 999 foo && useradd -u 999 -g foo -G sudo -m -s /bin/bash foo && \
sed -i /etc/sudoers -re 's/^%sudo.*/%sudo ALL=(ALL:ALL) NOPASSWD: ALL/g' && \
sed -i /etc/sudoers -re 's/^root.*/root ALL=(ALL:ALL) NOPASSWD: ALL/g' && \
sed -i /etc/sudoers -re 's/^#includedir.*/## **Removed the include directive** ##"/g' && \
echo "foo ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers && \
echo "Customized the sudoers file for passwordless access to the foo user!" && \
echo "foo user:"; su - foo -c id
What happens with the above code:
The user and group foo are created.
The user foo is added to both the foo and sudo groups.
The uid and gid are set to the value 999.
The home directory is set to /home/foo.
The shell is set to /bin/bash.
The sed commands do inline updates to the /etc/sudoers file to allow the foo and root users passwordless sudo access.
The last sed command disables the #includedir directive so that files in subdirectories cannot override these inline updates.
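A quick way to verify the passwordless setup, assuming the RUN block above is part of a complete Dockerfile (with sudo installed) built under an example tag:
$ docker build -t foo-sudo .
$ docker run --rm --user foo foo-sudo sudo -n true && echo "passwordless sudo works"
sudo -n fails instead of prompting when a password would be required, so this check is safe to run non-interactively.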
If sudo or apt-get is not accessible inside the container, you can use the option below on a running container:
docker exec -u root -it f83b5c5bf413 ash
"f83b5c5bf413" is my container ID & here is working example from my terminal:
This may not work for all images, but some images contain a root user already, such as in the jupyterhub/singleuser image. With that image it's simply:
USER root
RUN sudo apt-get update
The main idea is that you need to create a user that has root (sudo) rights inside the container.
Main commands:
RUN echo "bot:bot" | chpasswd
RUN adduser bot sudo
The first sends the literal string bot:bot to chpasswd, which sets the password of the user bot (created with useradd -m bot in the Dockerfile below) to bot; chpasswd does:
The chpasswd command reads a list of user name and password pairs from standard input and uses this information to update a group of existing users. Each line is of the format:
user_name:password
By default the supplied password must be in clear-text, and is encrypted by chpasswd. Also the password age will be updated, if present.
The second command adds the user bot to the sudo group.
Full docker container to play with:
FROM continuumio/miniconda3
# FROM --platform=linux/amd64 continuumio/miniconda3
MAINTAINER Brando Miranda "me#gmail.com"
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
ssh \
git \
m4 \
libgmp-dev \
opam \
wget \
ca-certificates \
rsync \
strace \
gcc \
rlwrap \
sudo
# https://github.com/giampaolo/psutil/pull/2103
RUN useradd -m bot
# format for chpasswd user_name:password
RUN echo "bot:bot" | chpasswd
RUN adduser bot sudo
WORKDIR /home/bot
USER bot
#CMD /bin/bash
If you have a container running as root that runs a script (which you can't change) that needs access to the sudo command, you can simply create a new sudo script in your $PATH that calls the passed command.
e.g. In your Dockerfile:
RUN if type sudo 2>/dev/null; then \
echo "The sudo command already exists... Skipping."; \
else \
echo -e "#!/bin/sh\n\${#}" > /usr/sbin/sudo; \
chmod +x /usr/sbin/sudo; \
fi
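With that in place, the generated /usr/sbin/sudo is just a two-line shim:
#!/bin/sh
exec "$@"
so a call such as sudo apt-get update made by the script simply runs apt-get update as the current (root) user.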
An example Dockerfile for CentOS 7. In this example we add prod_user with sudo privileges.
FROM centos:7
RUN yum -y update && yum clean all
RUN yum -y install openssh-server python3 sudo
RUN adduser -m prod_user && \
echo "MyPass*49?" | passwd prod_user --stdin && \
usermod -aG wheel prod_user && \
mkdir /home/prod_user/.ssh && \
chown prod_user:prod_user -R /home/prod_user/ && \
chmod 700 /home/prod_user/.ssh
RUN echo "prod_user ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers && \
echo "%wheel ALL=(ALL) ALL" >> /etc/sudoers
RUN echo "PasswordAuthentication yes" >> /etc/ssh/sshd_config
RUN systemctl enable sshd.service
VOLUME [ "/sys/fs/cgroup" ]
ENTRYPOINT ["/usr/sbin/init"]
There is no answer on how to do this on CentOS.
On CentOS, you can add the following to the Dockerfile:
RUN echo "user ALL=(root) NOPASSWD:ALL" > /etc/sudoers.d/user && \
chmod 0440 /etc/sudoers.d/user
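Optionally, if the sudo package (and therefore visudo) is installed in the image, you can validate the drop-in during the build so that a typo fails the build instead of silently breaking sudo at runtime (a sketch):
RUN visudo -cf /etc/sudoers.d/user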
I'm using an Ubuntu image and faced this issue while using Docker Desktop. The following resolved the issue:
The following resolved the issue:
apt-get update
apt-get install sudo

Start sshd automatically with docker container

Given:
container based on ubuntu:13.10
installed ssh (via apt-get install ssh)
Problem: each time I start the container I have to run sshd manually with service ssh start
Tried: update-rc.d ssh defaults, but it does not help.
Question: how do I set up the container so the sshd service starts automatically when the container starts?
Just try:
ENTRYPOINT service ssh restart && bash
in your Dockerfile; it works fine for me!
More details here: How to automatically start a service when running a docker container?
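For example (a sketch; the image tag ssh-bash is hypothetical):
docker build -t ssh-bash .
docker run -it -p 2222:22 ssh-bash
sshd is restarted first and you are then dropped into an interactive bash shell; the container stays up for as long as that shell runs.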
Here is a Dockerfile which installs ssh server and runs it:
# Build Ubuntu image with base functionality.
FROM ubuntu:focal AS ubuntu-base
ENV DEBIAN_FRONTEND noninteractive
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
# Setup the default user.
RUN useradd -rm -d /home/ubuntu -s /bin/bash -g root -G sudo ubuntu
RUN echo 'ubuntu:ubuntu' | chpasswd
USER ubuntu
WORKDIR /home/ubuntu
# Build image with Python and SSHD.
FROM ubuntu-base AS ubuntu-with-sshd
USER root
# Install required tools.
RUN apt-get -qq update \
&& apt-get -qq --no-install-recommends install vim-tiny=2:8.1.* \
&& apt-get -qq --no-install-recommends install sudo=1.8.* \
&& apt-get -qq --no-install-recommends install python3-pip=20.0.* \
&& apt-get -qq --no-install-recommends install openssh-server=1:8.* \
&& apt-get -qq clean \
&& rm -rf /var/lib/apt/lists/*
# Configure SSHD.
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
RUN mkdir /var/run/sshd
RUN bash -c 'install -m755 <(printf "#!/bin/sh\nexit 0") /usr/sbin/policy-rc.d'
RUN ex +'%s/^#\zeListenAddress/\1/g' -scwq /etc/ssh/sshd_config
RUN ex +'%s/^#\zeHostKey .*ssh_host_.*_key/\1/g' -scwq /etc/ssh/sshd_config
RUN RUNLEVEL=1 dpkg-reconfigure openssh-server
RUN ssh-keygen -A -v
RUN update-rc.d ssh defaults
# Configure sudo.
RUN ex +"%s/^%sudo.*$/%sudo ALL=(ALL:ALL) NOPASSWD:ALL/g" -scwq! /etc/sudoers
# Generate and configure user keys.
USER ubuntu
RUN ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
#COPY --chown=ubuntu:root "./files/authorized_keys" /home/ubuntu/.ssh/authorized_keys
# Setup default command and/or parameters.
EXPOSE 22
CMD ["/usr/bin/sudo", "/usr/sbin/sshd", "-D", "-o", "ListenAddress=0.0.0.0"]
Build with the following command:
docker build --target ubuntu-with-sshd -t ubuntu-with-sshd .
Then run with:
docker run -p 2222:22 ubuntu-with-sshd
To connect to container via local port, run: ssh -v localhost -p 2222.
To check for container IP address, use docker ps and docker inspect.
Here is example of docker-compose.yml file:
---
version: '3.4'
services:
ubuntu-with-sshd:
image: "ubuntu-with-sshd:latest"
build:
context: "."
target: "ubuntu-with-sshd"
networks:
mynet:
ipv4_address: 172.16.128.2
ports:
- "2222:22"
privileged: true # Required for /usr/sbin/init
networks:
mynet:
ipam:
config:
- subnet: 172.16.128.0/24
To run, type:
docker-compose up --build
I think the correct way to do it would be to follow Docker's instructions for dockerizing the ssh service.
In relation to the specific question, the following lines added at the end of the Dockerfile will achieve what you were looking for:
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Dockerize a SSHD service
I have created a Dockerfile to run ssh inside. I think it is not secure, but for testing/development in a DMZ it could be OK:
FROM ubuntu:20.04
USER root
# change root password to `ubuntu`
RUN echo 'root:ubuntu' | chpasswd
ENV DEBIAN_FRONTEND noninteractive
# install ssh server
RUN apt-get update && apt-get install -y \
openssh-server sudo \
&& rm -rf /var/lib/apt/lists/*
# workdir for ssh
RUN mkdir -p /run/sshd
# generate server keys
RUN ssh-keygen -A
# allow root to login
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config
EXPOSE 22
# run ssh server
CMD ["/usr/sbin/sshd", "-D", "-o", "ListenAddress=0.0.0.0"]
You can probably start the ssh server when starting your container. Something like this:
docker run ubuntu /usr/sbin/sshd -D
Check out this official tutorial.
This is what I did:
FROM nginx
# install gosu
# seealso:
# https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
# https://github.com/tianon/gosu/blob/master/INSTALL.md
# https://github.com/tianon/gosu
RUN set -eux; \
apt-get update; \
apt-get install -y gosu; \
rm -rf /var/lib/apt/lists/*; \
# verify that the binary works
gosu nobody true
ENV myenv='default'
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
COPY entrypoint.sh /entrypoint.sh
ENV AIRFLOW_HOME=/usr/local/airflow
RUN mkdir $AIRFLOW_HOME
RUN groupadd --gid 8080 airflow
RUN useradd --uid 8080 --gid 8080 -ms /bin/bash -d $AIRFLOW_HOME airflow
RUN echo 'airflow:mypass' | chpasswd
EXPOSE 22
CMD ["/entrypoint.sh"]
Inside entrypoint.sh:
echo "starting ssh as root"
gosu root service ssh start &
#gosu root /usr/sbin/sshd -D &
echo "starting tail user"
exec gosu airflow tail -f /dev/null
Well, I used the following command to solve that
docker run -i -t mycentos6 /bin/bash -c '/etc/init.d/sshd start && /bin/bash'
First log in to your container and write an initialization script /bin/init as follows:
# execute in the container
cat <<EOT >> /bin/init
#!/bin/bash
service ssh start
while true; do sleep 1; done
EOT
chmod +x /bin/init
Then make sure the root user is permitted to log in via ssh:
# execute in the container
echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
Commit the container to a new image after exiting from the container:
# execute in the server
docker commit <YOUR_CONTAINER> <ANY_REPO>:<ANY_TAG>
From now on, as long as you run your container with the following command, the ssh service will be automatically started.
# execute in the server
docker run -it -d --name <NAME> <REPO>:<TAG> /bin/init
docker exec -it <NAME> /bin/bash
Done.
You can try a more elegant way to do that with phusion/baseimage-docker
https://github.com/phusion/baseimage-docker#readme
