Using sudo inside a non-privileged docker container not working

I don't want to be root inside a docker container.
But I have to modify some files which belong to root in a script.
I want to use sudo for this.
This is my Dockerfile:
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install -y curl wget python openssh-server sudo
RUN mkdir /grader
RUN mkdir /grader/week1
RUN mkdir /grader/week1/assignment2
ADD executeGrader.sh /grader/
RUN groupadd -g 1000 coursera
RUN useradd -g 1000 -u 1000 --shell /bin/bash coursera
RUN usermod -a -G sudo coursera
RUN mkdir /home/coursera
RUN chown coursera:coursera /home/coursera
RUN echo "StrictHostKeyChecking no" >> /etc/ssh/ssh_config
RUN echo "coursera ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
RUN chmod 777 /etc/hostname
USER coursera
EXPOSE 8080
EXPOSE 8081
ENTRYPOINT ["/grader/executeGrader.sh"]
executeGrader.sh contains:
#!/bin/bash
id
sudo -u root -H bash -c "hostname localhost"
But this is the output I get:
>>docker run -h sdfsdfsdf323 -u 1000:1000 -P stackoverflow
uid=1000(coursera) gid=1000(coursera) groups=1000(coursera)
hostname: you must be root to change the host name
Any ideas?

Thanks for all your support; this is what finally worked for me:
export temphostname=`hostname`
sudo su -c "echo 127.0.0.1 $temphostname >> /etc/hosts"
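For reference, a minimal sketch of executeGrader.sh with that workaround applied (untested; it assumes the Dockerfile above, and leaves the container's hostname itself unchanged, since changing it needs privileges the container doesn't have):
#!/bin/bash
id
# changing the hostname inside the container requires extra privileges,
# so map the current hostname to 127.0.0.1 in /etc/hosts instead
temphostname=$(hostname)
sudo su -c "echo 127.0.0.1 $temphostname >> /etc/hosts"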

Related

crontab not executed in docker

I need to execute a crontab inside a docker container, so I created the following Dockerfile:
FROM openjdk:11-oraclelinux8
RUN mkdir -p /opt/my-user/
RUN mkdir -p /opt/my-user/joblogs
RUN groupadd my-user && adduser my-user -g my-user
RUN chown -R my-user:my-user /opt/my-user/
RUN microdnf install yum
RUN yum -y update
RUN yum -y install cronie
RUN yum -y install vi
RUN yum -y install telnet
COPY talend /opt/my-user/
COPY entrypoint.sh /opt/my-user/
RUN chmod +x /opt/my-user/entrypoint.sh
RUN chmod +x /opt/my-user/ETLJob/ETLJob_run.sh
RUN chown -R my-user:my-user /opt/my-user/
RUN echo "*/2 * * * * /bin/sh /opt/my-user/ETLJob/ETLJob_run.sh >> /opt/my-user/joblogs/job.log 2>&1" >> /etc/cron.d/my-user-job
RUN chmod 0644 /etc/cron.d/my-user-job
RUN crontab -u my-user /etc/cron.d/my-user-job
RUN chmod u+s /usr/sbin/crond
USER my-user:my-user
ENTRYPOINT [ "/opt/my-user/entrypoint.sh" ]
My entrypoint.sh file is the following one:
#!/bin/bash
echo "Start cron"
crontab /etc/cron.d/diomedee-job
echo "cron started"
# Run forever
tail -f /dev/null
So far so good: the container is created successfully, and when I go inside the container and type crontab -l I see the crontab... but it is never executed.
I can't figure out what I'm missing; none of the research I did gave me a clue.
Can you give me any tips?
Docker containers usually host only a single process; in your case that is the tail process, so the cron daemon isn't running.
Your comment 'cron started' seems to indicate that running crontab starts the daemon, but it doesn't.
Replace your tail -f /dev/null command with crond -f to run the cron daemon in the foreground and it should work.
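A minimal sketch of the adjusted entrypoint.sh (untested; it keeps the crontab installation step and refers to the my-user-job file created in the Dockerfile):
#!/bin/bash
echo "Start cron"
crontab /etc/cron.d/my-user-job
echo "cron started"
# run the cron daemon in the foreground so it is the container's long-lived process
crond -f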

permission denied docker localhost

I just started to learn docker...
I faced this issue when building an image from a Dockerfile, running a container, and trying to access it.
When I try to log in to localhost via ssh -p 12000 root@localhost,
it keeps saying permission denied, even when I enter abcd as the password.
FROM ubuntu:20.04
RUN apt update && apt -y upgrade
RUN apt install -y openssh-server
RUN apt-get install -y gcc
RUN mkdir /var/run/sshd
RUN echo 'root:abcd' | chpasswd
RUN sed -i 's/#*PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config
RUN sed -i 's#session\s*s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' /etc/pam.d/sshd
ENV NOTVISIBLE="in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
COPY hw.c /root
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
WORKDIR /root
RUN gcc -o root hw.c
The best way to get a shell in a container is by running this command (this is for your ubuntu container):
docker exec -ti <container_id> bash
You can get the container_id by running docker ps if you didn't set a fixed name.
Then you can remove all of these lines:
RUN mkdir /var/run/sshd
RUN echo 'root:abcd' | chpasswd
RUN sed -i 's/#*PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config
RUN sed -i 's#session\s*s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' /etc/pam.d/sshd
ENV NOTVISIBLE="in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
COPY hw.c /root
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Also remember that everything you do over ssh inside the container will be lost once the container is removed, so it's always better to put everything in the Dockerfile.
I fixed it by deleting all remaining containers!

Start docker container under non-root user

My Dockerfile:
FROM openjdk:11-jdk
RUN apt-get update \
&& apt-get install --no-install-recommends -y git openssh-server \
&& rm -rf /var/lib/apt/lists/*
RUN groupadd --gid 3000 jenkins \
&& useradd --uid 3000 --gid jenkins --shell /bin/bash --create-home jenkins
RUN mkdir -p /var/run/sshd
EXPOSE 22
ENTRYPOINT /usr/sbin/sshd -D && bash
docker build --tag sample .
I tried to start it with the jenkins user:
docker run -u 3000:3000 sample
which returns
Could not load host key: /etc/ssh/ssh_host_rsa_key
Could not load host key: /etc/ssh/ssh_host_ecdsa_key
Could not load host key: /etc/ssh/ssh_host_ed25519_key
I've read all the similar questions on Stack Overflow and nothing works in my case.
I tried
RUN yes 'y' | ssh-keygen -b 1024 -t rsa -f /etc/ssh/all_needed_keys -N ''
and also
RUN /usr/bin/ssh-keygen -A
but neither works.
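No accepted fix is shown in this thread. A likely cause, offered here as an assumption rather than something stated above, is that sshd started by UID 3000 cannot read the root-owned host keys under /etc/ssh. An untested sketch that gives the jenkins user its own host key, an unprivileged port, and a writable pid file (key-based login for the jenkins user only):
# untested sketch (not from the thread)
RUN mkdir -p /home/jenkins/sshd && \
    ssh-keygen -q -N '' -t ed25519 -f /home/jenkins/sshd/ssh_host_ed25519_key && \
    chown -R jenkins:jenkins /home/jenkins/sshd
EXPOSE 2222
USER jenkins
ENTRYPOINT ["/usr/sbin/sshd", "-D", "-p", "2222", "-h", "/home/jenkins/sshd/ssh_host_ed25519_key", "-o", "PidFile=/home/jenkins/sshd/sshd.pid"]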

How to use sudo inside a docker container?

Normally, docker containers are run as the root user. I'd like to use a different user, which is no problem using docker's USER directive. But this user should be able to use sudo inside the container, and that command is missing.
Here's a simple Dockerfile for this purpose:
FROM ubuntu:12.04
RUN useradd docker && echo "docker:docker" | chpasswd
RUN mkdir -p /home/docker && chown -R docker:docker /home/docker
USER docker
CMD /bin/bash
Running this container, I get logged in with user 'docker'. When I try to use sudo, the command isn't found. So I tried to install the sudo package inside my Dockerfile using
RUN apt-get install sudo
This results in Unable to locate package sudo
Just got it. As regan pointed out, I had to add the user to the sudoers group. But the main reason was I'd forgotten to update the repositories cache, so apt-get couldn't find the sudo package. It's working now. Here's the completed code:
FROM ubuntu:12.04
RUN apt-get update && \
apt-get -y install sudo
RUN useradd -m docker && echo "docker:docker" | chpasswd && adduser docker sudo
USER docker
CMD /bin/bash
When neither sudo nor apt-get is available in the container, you can also jump into a running container as the root user using:
docker exec -u root -t -i container_id /bin/bash
The other answers didn't work for me. I kept searching and found a blog post that covered how a team was running non-root inside of a docker container.
Here's the TL;DR version:
RUN apt-get update \
&& apt-get install -y sudo
RUN adduser --disabled-password --gecos '' docker
RUN adduser docker sudo
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER docker
# this is where I was running into problems with the other approaches
RUN sudo apt-get update
I was using FROM node:9.3 for this, but I suspect that other similar container bases would work as well.
For anyone who has this issue with an already-running container and doesn't necessarily want to rebuild, the following command connects to a running container with root privileges:
docker exec -ti -u root container_name bash
You can also connect using its ID, rather than its name, by finding it with:
docker ps -l
To save your changes so that they are still there when you next launch the container (or docker-compose cluster) - note that these changes would not be repeated if you rebuild from scratch:
docker commit container_id image_name
To roll back to a previous image version (warning: this deletes history rather than appends to the end, so to keep a reference to the current image, tag it first using the optional step):
docker history image_name
docker tag latest_image_id my_descriptive_tag_name # optional
docker tag desired_history_image_id image_name
To start a container that isn't running and connect as root:
docker run -ti -u root --entrypoint=/bin/bash image_id_or_name -s
To copy from a running container:
docker cp <containerId>:/file/path/within/container /host/path/target
To export a copy of the image:
docker save image_name | gzip > /dir/file.tar.gz
Which you can restore to another Docker install using:
gzcat /dir/file.tar.gz | docker load
It is quicker, but takes more space, not to compress:
docker save image_name > /dir/file.tar
And:
cat /dir/file.tar | docker load
If you want to connect to a container and install something using apt-get,
first, as in the answer above from Tomáš Záluský, run
docker exec -u root -t -i container_id /bin/bash
then try
apt-get update
apt-get install <whatever you want>
It worked for me; I hope it's useful for all.
Unlike the accepted answer, I use usermod instead.
Assuming you're already logged in as root inside the container, and "fruit" is the new non-root username you want to add, simply run these commands:
apt update && apt install sudo
adduser fruit
usermod -aG sudo fruit
Remember to save the image after the update. Use docker ps to get the running container's <CONTAINER ID> and <IMAGE>, then run docker commit -m "added sudo user" <CONTAINER ID> <IMAGE> to save the image.
Then test with:
su fruit
sudo whoami
Or test by logging in directly as that non-root user when launching the container (make sure to save the image first):
docker run -it --user fruit <IMAGE>
sudo whoami
You can use sudo -k to reset the password prompt timestamp:
sudo whoami # No password prompt
sudo -k # Invalidates the user's cached credentials
sudo whoami # This will prompt for password
Here's how I set up a non-root user with the ubuntu:18.04 base image:
RUN \
groupadd -g 999 foo && useradd -u 999 -g foo -G sudo -m -s /bin/bash foo && \
sed -i /etc/sudoers -re 's/^%sudo.*/%sudo ALL=(ALL:ALL) NOPASSWD: ALL/g' && \
sed -i /etc/sudoers -re 's/^root.*/root ALL=(ALL:ALL) NOPASSWD: ALL/g' && \
sed -i /etc/sudoers -re 's/^#includedir.*/## **Removed the include directive** ##/g' && \
echo "foo ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers && \
echo "Customized the sudoers file for passwordless access to the foo user!" && \
echo "foo user:"; su - foo -c id
What happens with the above code:
The user and group foo is created.
The user foo is added to both the foo and sudo groups.
The uid and gid are set to the value of 999.
The home directory is set to /home/foo.
The shell is set to /bin/bash.
The sed commands do inline updates to the /etc/sudoers file to give the foo and root users passwordless sudo access.
The sed command disables the #includedir directive that would allow any files in subdirectories to override these inline updates.
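A quick way to check the result (hypothetical commands; they assume a Dockerfile FROM ubuntu:18.04 that installs the sudo package via apt-get and then contains the RUN block above):
docker build -t foo-sudo .
# the foo user should get root without a password prompt
docker run --rm -it --user foo foo-sudo sudo whoami   # expected output: root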
If sudo or apt-get is not accessible inside the container, you can use the option below on a running container:
docker exec -u root -it f83b5c5bf413 ash
"f83b5c5bf413" is my container ID.
This may not work for all images, but some images contain a root user already, such as in the jupyterhub/singleuser image. With that image it's simply:
USER root
RUN sudo apt-get update
The main idea is that you need to create a user inside the container that has root (sudo) rights.
Main commands:
RUN echo "bot:bot" | chpasswd
RUN adduser bot sudo
The first sends the literal string bot:bot to chpasswd, which sets the password of the (already created) user bot to bot; chpasswd does:
The chpasswd command reads a list of user name and password pairs from standard input and uses this information to update a group of existing users. Each line is of the format:
user_name:password
By default the supplied password must be in clear-text, and is encrypted by chpasswd. Also the password age will be updated, if present.
The second command adds the user bot to the sudo group.
Full docker container to play with:
FROM continuumio/miniconda3
# FROM --platform=linux/amd64 continuumio/miniconda3
MAINTAINER Brando Miranda "me@gmail.com"
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
ssh \
git \
m4 \
libgmp-dev \
opam \
wget \
ca-certificates \
rsync \
strace \
gcc \
rlwrap \
sudo
# https://github.com/giampaolo/psutil/pull/2103
RUN useradd -m bot
# format for chpasswd user_name:password
RUN echo "bot:bot" | chpasswd
RUN adduser bot sudo
WORKDIR /home/bot
USER bot
#CMD /bin/bash
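A hypothetical way to try it out (image and container names are placeholders, not from the original answer):
docker build -t bot-sudo .
docker run --rm -it bot-sudo bash
# inside the container:
#   sudo whoami     # enter the password "bot" when prompted; should print: root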
If you have a container running as root that runs a script (which you can't change) that needs access to the sudo command, you can simply create a new sudo script in your $PATH that calls the passed command.
e.g. In your Dockerfile:
RUN if type sudo 2>/dev/null; then \
echo "The sudo command already exists... Skipping."; \
else \
echo -e "#!/bin/sh\n\${@}" > /usr/sbin/sudo; \
chmod +x /usr/sbin/sudo; \
fi
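With that shim in place, a script's sudo calls simply run the given command as the current (already root) user; for example (hypothetical):
sudo apt-get update   # the shim just executes: apt-get update, with no privilege change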
An example Dockerfile for CentOS 7. In this example we add prod_user with sudo privileges.
FROM centos:7
RUN yum -y update && yum clean all
RUN yum -y install openssh-server python3 sudo
RUN adduser -m prod_user && \
echo "MyPass*49?" | passwd prod_user --stdin && \
usermod -aG wheel prod_user && \
mkdir /home/prod_user/.ssh && \
chown prod_user:prod_user -R /home/prod_user/ && \
chmod 700 /home/prod_user/.ssh
RUN echo "prod_user ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers && \
echo "%wheel ALL=(ALL) ALL" >> /etc/sudoers
RUN echo "PasswordAuthentication yes" >> /etc/ssh/sshd_config
RUN systemctl enable sshd.service
VOLUME [ "/sys/fs/cgroup" ]
ENTRYPOINT ["/usr/sbin/init"]
There is no answer on how to do this on CentOS.
On CentOS, you can add the following to your Dockerfile:
RUN echo "user ALL=(root) NOPASSWD:ALL" > /etc/sudoers.d/user && \
chmod 0440 /etc/sudoers.d/user
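For context, a minimal sketch of how that snippet might sit in a full Dockerfile (the base image, package install, and user creation are assumptions, not part of the original answer):
FROM centos:7
RUN yum -y install sudo && useradd -m user
RUN echo "user ALL=(root) NOPASSWD:ALL" > /etc/sudoers.d/user && \
    chmod 0440 /etc/sudoers.d/user
USER user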
I'm using an Ubuntu image with Docker Desktop and ran into this issue.
The following resolved it:
apt-get update
apt-get install sudo

Start sshd automatically with docker container

Given:
container based on ubuntu:13.10
installed ssh (via apt-get install ssh)
Problem: each time I start the container I have to run sshd manually with service ssh start
Tried: update-rc.d ssh defaults, but it does not help.
Question: how do I set up the container to start the sshd service automatically during container start?
Just try:
ENTRYPOINT service ssh restart && bash
in your Dockerfile, it works fine for me!
more details here: How to automatically start a service when running a docker container?
Here is a Dockerfile which installs ssh server and runs it:
# Build Ubuntu image with base functionality.
FROM ubuntu:focal AS ubuntu-base
ENV DEBIAN_FRONTEND noninteractive
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
# Setup the default user.
RUN useradd -rm -d /home/ubuntu -s /bin/bash -g root -G sudo ubuntu
RUN echo 'ubuntu:ubuntu' | chpasswd
USER ubuntu
WORKDIR /home/ubuntu
# Build image with Python and SSHD.
FROM ubuntu-base AS ubuntu-with-sshd
USER root
# Install required tools.
RUN apt-get -qq update \
&& apt-get -qq --no-install-recommends install vim-tiny=2:8.1.* \
&& apt-get -qq --no-install-recommends install sudo=1.8.* \
&& apt-get -qq --no-install-recommends install python3-pip=20.0.* \
&& apt-get -qq --no-install-recommends install openssh-server=1:8.* \
&& apt-get -qq clean \
&& rm -rf /var/lib/apt/lists/*
# Configure SSHD.
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
RUN mkdir /var/run/sshd
RUN bash -c 'install -m755 <(printf "#!/bin/sh\nexit 0") /usr/sbin/policy-rc.d'
RUN ex +'%s/^#\zeListenAddress/\1/g' -scwq /etc/ssh/sshd_config
RUN ex +'%s/^#\zeHostKey .*ssh_host_.*_key/\1/g' -scwq /etc/ssh/sshd_config
RUN RUNLEVEL=1 dpkg-reconfigure openssh-server
RUN ssh-keygen -A -v
RUN update-rc.d ssh defaults
# Configure sudo.
RUN ex +"%s/^%sudo.*$/%sudo ALL=(ALL:ALL) NOPASSWD:ALL/g" -scwq! /etc/sudoers
# Generate and configure user keys.
USER ubuntu
RUN ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
#COPY --chown=ubuntu:root "./files/authorized_keys" /home/ubuntu/.ssh/authorized_keys
# Setup default command and/or parameters.
EXPOSE 22
CMD ["/usr/bin/sudo", "/usr/sbin/sshd", "-D", "-o", "ListenAddress=0.0.0.0"]
Build with the following command:
docker build --target ubuntu-with-sshd -t ubuntu-with-sshd .
Then run with:
docker run -p 2222:22 ubuntu-with-sshd
To connect to container via local port, run: ssh -v localhost -p 2222.
To check for container IP address, use docker ps and docker inspect.
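For example (the container name is a placeholder):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-sshd-container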
Here is an example docker-compose.yml file:
---
version: '3.4'
services:
  ubuntu-with-sshd:
    image: "ubuntu-with-sshd:latest"
    build:
      context: "."
      target: "ubuntu-with-sshd"
    networks:
      mynet:
        ipv4_address: 172.16.128.2
    ports:
      - "2222:22"
    privileged: true # Required for /usr/sbin/init
networks:
  mynet:
    ipam:
      config:
        - subnet: 172.16.128.0/24
To run, type:
docker-compose up --build
I think the correct way to do it is to follow Docker's instructions for dockerizing the ssh service.
For this specific question, the following lines added at the end of the Dockerfile will achieve what you were looking for:
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Dockerize a SSHD service
I have created a Dockerfile to run ssh inside. I think it is not secure, but for testing/development in a DMZ it could be OK:
FROM ubuntu:20.04
USER root
# change root password to `ubuntu`
RUN echo 'root:ubuntu' | chpasswd
ENV DEBIAN_FRONTEND noninteractive
# install ssh server
RUN apt-get update && apt-get install -y \
openssh-server sudo \
&& rm -rf /var/lib/apt/lists/*
# workdir for ssh
RUN mkdir -p /run/sshd
# generate server keys
RUN ssh-keygen -A
# allow root to login
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config
EXPOSE 22
# run ssh server
CMD ["/usr/sbin/sshd", "-D", "-o", "ListenAddress=0.0.0.0"]
You can probably start the ssh server when starting your container. Something like this:
docker run ubuntu /usr/sbin/sshd -D
Check out this official tutorial.
This is what I did:
FROM nginx
# install gosu
# seealso:
# https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
# https://github.com/tianon/gosu/blob/master/INSTALL.md
# https://github.com/tianon/gosu
RUN set -eux; \
apt-get update; \
apt-get install -y gosu; \
rm -rf /var/lib/apt/lists/*; \
# verify that the binary works
gosu nobody true
ENV myenv='default'
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
COPY entrypoint.sh /entrypoint.sh
ENV AIRFLOW_HOME=/usr/local/airflow
RUN mkdir $AIRFLOW_HOME
RUN groupadd --gid 8080 airflow
RUN useradd --uid 8080 --gid 8080 -ms /bin/bash -d $AIRFLOW_HOME airflow
RUN echo 'airflow:mypass' | chpasswd
EXPOSE 22
CMD ["/entrypoint.sh"]
Inside entrypoint.sh:
#!/bin/bash
echo "starting ssh as root"
gosu root service ssh start &
#gosu root /usr/sbin/sshd -D &
echo "starting tail user"
exec gosu airflow tail -f /dev/null
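A hypothetical way to use this image (names and port mapping are placeholders):
docker build -t airflow-ssh .
docker run -d -p 2222:22 --name airflow-ssh airflow-ssh
ssh -p 2222 airflow@localhost    # password: mypass, as set via chpasswd in the Dockerfile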
Well, I used the following command to solve that
docker run -i -t mycentos6 /bin/bash -c '/etc/init.d/sshd start && /bin/bash'
First login to your container and write an initialization script /bin/init as following:
# execute in the container
cat <<EOT >> /bin/init
#!/bin/bash
service ssh start
while true; do sleep 1; done
EOT
chmod +x /bin/init
Then make sure the root user is permitted to log in via ssh:
# execute in the container
echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
Commit the container to a new image after exiting from the container:
# execute in the server
docker commit <YOUR_CONTAINER> <ANY_REPO>:<ANY_TAG>
From now on, as long as you run your container with the following command, the ssh service will be automatically started.
# execute in the server
docker run -it -d --name <NAME> <REPO>:<TAG> /bin/init
docker exec -it <NAME> /bin/bash
Done.
You can try a more elegant way to do that with phusion/baseimage-docker
https://github.com/phusion/baseimage-docker#readme
