permission denied docker localhost - docker

I just started to learn Docker, and I ran into this issue while building an image from a Dockerfile, running a container, and trying to access it.
When I try to log in to localhost via ssh -p 12000 root@localhost, it keeps saying permission denied, even when I enter abcd as the password.
FROM ubuntu:20.04
RUN apt update && apt -y upgrade
RUN apt install -y openssh-server
RUN apt-get install -y gcc
RUN mkdir /var/run/sshd
RUN echo 'root:abcd' | chpasswd
RUN sed -i 's/#*PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config
RUN sed -i 's#session\s*s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' /etc/pam.d/sshd
ENV NOTVISIBLE="in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
COPY hw.c /root
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
WORKDIR /root
RUN gcc -o root hw.c
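For context, a build and run along these lines is presumably what was used (the image name here is just a placeholder; the important part is publishing host port 12000 to the container's port 22):
docker build -t my-ssh-image .
docker run -d -p 12000:22 my-ssh-image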

The best way to get a shell in a container is by running this command (this is for your Ubuntu container):
docker exec -ti <container_id> bash
You can get the container_id by running docker ps if you didn't set a fixed name.
Then you can remove all these lines:
RUN mkdir /var/run/sshd
RUN echo 'root:abcd' | chpasswd
RUN sed -i 's/#*PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config
RUN sed -i 's#session\s*s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' /etc/pam.d/sshd
ENV NOTVISIBLE="in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
COPY hw.c /root
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Also remember that everything you do over ssh inside the container will be lost after the container is killed, so it is always better to put everything in the Dockerfile.
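For illustration, the trimmed Dockerfile might be reduced to something like this (a sketch only, keeping just the compiler and the hw.c program):
FROM ubuntu:20.04
RUN apt update && apt -y upgrade
RUN apt install -y gcc
COPY hw.c /root
WORKDIR /root
RUN gcc -o hw hw.c
Build it, run it detached with a TTY so the default shell keeps the container alive, and then exec in:
docker build -t hw-image .
docker run -dit --name hw hw-image
docker exec -ti hw bash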

I fixed it by deleting all the remaining containers.

Related

crontab not executed in Docker

I need to execute a crontab inside a Docker container, so I created the following Dockerfile:
FROM openjdk:11-oraclelinux8
RUN mkdir -p /opt/my-user/
RUN mkdir -p /opt/my-user/joblogs
RUN groupadd my-user && adduser my-user -g my-user
RUN chown -R my-user:my-user /opt/my-user/
RUN microdnf install yum
RUN yum -y update
RUN yum -y install cronie
RUN yum -y install vi
RUN yum -y install telnet
COPY talend /opt/my-user/
COPY entrypoint.sh /opt/my-user/
RUN chmod +x /opt/my-user/entrypoint.sh
RUN chmod +x /opt/my-user/ETLJob/ETLJob_run.sh
RUN chown -R my-user:my-user /opt/my-user/
RUN echo "*/2 * * * * /bin/sh /opt/my-user/ETLJob/ETLJob_run.sh >> /opt/my-user/joblogs/job.log 2>&1" >> /etc/cron.d/my-user-job
RUN chmod 0644 /etc/cron.d/my-user-job
RUN crontab -u my-user /etc/cron.d/my-user-job
RUN chmod u+s /usr/sbin/crond
USER my-user:my-user
ENTRYPOINT [ "/opt/my-user/entrypoint.sh" ]
My entrypoint.sh file is the following:
#!/bin/bash
echo "Start cron"
crontab /etc/cron.d/my-user-job
echo "cron started"
# Run forever
tail -f /dev/null
So far so good: the container is created successfully, and when I go inside the container and type crontab -l I see the crontab... but it is never executed.
I can't figure out what I'm missing; the research I did hasn't given me any clue.
Could you give me a tip?
Docker containers usually host only a single process; in your case, that is the tail process. The cron daemon isn't running.
Your comment 'cron started' seems to indicate that running crontab starts the daemon, but it doesn't.
Replace your tail -f /dev/null command with crond -f to run the cron daemon in the foreground and it should work.
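A minimal sketch of the adjusted entrypoint.sh along those lines (reusing the cron file written by the Dockerfile above):
#!/bin/bash
echo "Start cron"
# register the job for the current user
crontab /etc/cron.d/my-user-job
echo "cron registered"
# run the cron daemon in the foreground; it becomes the long-running process that keeps the container alive
exec crond -f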

Can't ssh to a Docker container (ubuntu:19.04) because of permission denied

I built an SSH service Docker image from this Dockerfile.
FROM ubuntu:19.04
RUN apt-get update && apt-get install -y openssh-server \
postgresql-client \
language-pack-ja
RUN update-locale LANG=ja_JP.UTF-8
RUN mkdir /var/run/sshd
ARG ROOT_PASSWORD
RUN echo root:${ROOT_PASSWORD} | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
I followed this page.
https://docs.docker.com/engine/examples/running_ssh_service/
The only difference is that I changed the Ubuntu image version to 19.04.
However, I couldn't ssh in; permission was denied.
docker build --build-arg ROOT_PASSWORD=$ROOT_PASSWORD -t eg_sshd .
docker run -d -P --name test_sshd eg_sshd
docker port test_sshd 22
0.0.0.0:32770
ssh root@localhost -p 32770
root@localhost's password:
Permission denied, please try again.
Why is permission denied?
The PermitRootLogin line was not commented out in 16.04, but it is commented out in 18.04, so I added #\? to the sed pattern in order to accommodate both.
It worked with the following Dockerfile:
FROM ubuntu:19.10
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:root' | chpasswd
RUN sed -i 's/#\?PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

Dockerfile, groupadd with GID parameter

I am trying to add a group with an ID declared in the Dockerfile; however, I always get this error:
groups: cannot find name for group ID 1001
My Dockerfile:
FROM python:3.7.1
ARG UID=1001
RUN apt-get update && apt-get install -y openssh-server sudo
RUN mkdir /var/run/sshd
RUN echo 'root:pycharm' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
RUN groupadd -r charm -g 1001
RUN useradd -ms /bin/bash charm -g charm -u 1001
RUN echo "export VISIBLE=now" >> /etc/profile
ADD helpers /opt/.pycharm_helpers
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
This is the error message I get when I try to enter the container with:
docker exec -ti -u 1001 pydebug1 bash
The group doesn't exist in the /etc/group file. When I run the commands inside the container it works, but I want to have them in the Dockerfile.
I saved your Dockerfile content in a Dockerfile and touched a file named helpers.
Then I built the image and entered it. (The screenshot, not reproduced here, showed the second build, the commit hash, and the resulting image id, along with all the commands.)
You may build it as docker build -t pydebug1 . as well.
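If you want to verify the group and user inside the image, a quick check along these lines should work (image name taken from the build command above; the expected output is approximate):
docker run --rm pydebug1 getent group 1001
# expected: charm:x:1001:
docker run --rm pydebug1 id charm
# expected: uid=1001(charm) gid=1001(charm) groups=1001(charm)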

Using sudo inside a non-privileged docker container not working

I don't want to be root inside a docker container.
But I have to modify some files which belong to root in a script.
I want to use sudo for this.
This is my Dockerfile:
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install -y curl wget python openssh-server sudo
RUN mkdir /grader
RUN mkdir /grader/week1
RUN mkdir /grader/week1/assignment2
ADD executeGrader.sh /grader/
RUN groupadd -g 1000 coursera
RUN useradd -g 1000 -u 1000 --shell /bin/bash coursera
RUN usermod -a -G sudo coursera
RUN mkdir /home/coursera
RUN chown coursera:coursera /home/coursera
RUN echo "StrictHostKeyChecking no" >> /etc/ssh/ssh_config
RUN echo "coursera ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
RUN chmod 777 /etc/hostname
USER coursera
EXPOSE 8080
EXPOSE 8081
ENTRYPOINT ["/grader/executeGrader.sh"]
executeGrader.sh contains:
#!/bin/bash
id
sudo -u root -H bash -c "hostname localhost"
But I get this:
>>docker run -h sdfsdfsdf323 -u 1000:1000 -P stackoverflow
uid=1000(coursera) gid=1000(coursera) groups=1000(coursera)
hostname: you must be root to change the host name
Any ideas?
Thanks for all your support; this is what finally worked for me:
export temphostname=`hostname`
sudo su -c "echo 127.0.0.1 $temphostname >> /etc/hosts"
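Folding that into the original script, a revised executeGrader.sh might look roughly like this (a sketch only; the hostname command is replaced by the /etc/hosts workaround above):
#!/bin/bash
id
# instead of changing the hostname (which needs extra privileges),
# map the container's assigned hostname to 127.0.0.1
export temphostname=`hostname`
sudo su -c "echo 127.0.0.1 $temphostname >> /etc/hosts"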

Start sshd automatically with docker container

Given:
a container based on ubuntu:13.10
ssh installed (via apt-get install ssh)
Problem: each time I start the container I have to start sshd manually with service ssh start.
Tried: update-rc.d ssh defaults, but it does not help.
Question: how do I set up the container so that the sshd service starts automatically when the container starts?
Just try:
ENTRYPOINT service ssh restart && bash
in your Dockerfile; it works fine for me!
More details here: How to automatically start a service when running a docker container?
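For completeness, a minimal Dockerfile along those lines might look like this (a sketch only, matching the ubuntu:13.10 base from the question; that release is long past end of life, so a newer base may be needed in practice):
FROM ubuntu:13.10
RUN apt-get update && apt-get install -y ssh
# restart sshd, then drop into a shell so the container keeps running
ENTRYPOINT service ssh restart && bash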
Here is a Dockerfile which installs ssh server and runs it:
# Build Ubuntu image with base functionality.
FROM ubuntu:focal AS ubuntu-base
ENV DEBIAN_FRONTEND noninteractive
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
# Setup the default user.
RUN useradd -rm -d /home/ubuntu -s /bin/bash -g root -G sudo ubuntu
RUN echo 'ubuntu:ubuntu' | chpasswd
USER ubuntu
WORKDIR /home/ubuntu
# Build image with Python and SSHD.
FROM ubuntu-base AS ubuntu-with-sshd
USER root
# Install required tools.
RUN apt-get -qq update \
&& apt-get -qq --no-install-recommends install vim-tiny=2:8.1.* \
&& apt-get -qq --no-install-recommends install sudo=1.8.* \
&& apt-get -qq --no-install-recommends install python3-pip=20.0.* \
&& apt-get -qq --no-install-recommends install openssh-server=1:8.* \
&& apt-get -qq clean \
&& rm -rf /var/lib/apt/lists/*
# Configure SSHD.
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
RUN mkdir /var/run/sshd
RUN bash -c 'install -m755 <(printf "#!/bin/sh\nexit 0") /usr/sbin/policy-rc.d'
RUN ex +'%s/^#\zeListenAddress/\1/g' -scwq /etc/ssh/sshd_config
RUN ex +'%s/^#\zeHostKey .*ssh_host_.*_key/\1/g' -scwq /etc/ssh/sshd_config
RUN RUNLEVEL=1 dpkg-reconfigure openssh-server
RUN ssh-keygen -A -v
RUN update-rc.d ssh defaults
# Configure sudo.
RUN ex +"%s/^%sudo.*$/%sudo ALL=(ALL:ALL) NOPASSWD:ALL/g" -scwq! /etc/sudoers
# Generate and configure user keys.
USER ubuntu
RUN ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
#COPY --chown=ubuntu:root "./files/authorized_keys" /home/ubuntu/.ssh/authorized_keys
# Setup default command and/or parameters.
EXPOSE 22
CMD ["/usr/bin/sudo", "/usr/sbin/sshd", "-D", "-o", "ListenAddress=0.0.0.0"]
Build with the following command:
docker build --target ubuntu-with-sshd -t ubuntu-with-sshd .
Then run with:
docker run -p 2222:22 ubuntu-with-sshd
To connect to container via local port, run: ssh -v localhost -p 2222.
To check for container IP address, use docker ps and docker inspect.
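For example, to print just a container's IP address:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_id_or_name>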
Here is an example docker-compose.yml file:
---
version: '3.4'

services:
  ubuntu-with-sshd:
    image: "ubuntu-with-sshd:latest"
    build:
      context: "."
      target: "ubuntu-with-sshd"
    networks:
      mynet:
        ipv4_address: 172.16.128.2
    ports:
      - "2222:22"
    privileged: true # Required for /usr/sbin/init

networks:
  mynet:
    ipam:
      config:
        - subnet: 172.16.128.0/24
To run, type:
docker-compose up --build
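After that you should be able to ssh in either via the published port or, from the Docker host, via the static address (user and password ubuntu/ubuntu as set in the Dockerfile above):
ssh ubuntu@localhost -p 2222
ssh ubuntu@172.16.128.2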
I think the correct way to do it would be to follow Docker's instructions for dockerizing the SSH service.
In relation to the specific question, the following lines added at the end of the Dockerfile will achieve what you were looking for:
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Dockerize an SSHD service
I have created a Dockerfile to run ssh inside. I think it is not secure, but for testing/development in a DMZ it could be OK:
FROM ubuntu:20.04
USER root
# change root password to `ubuntu`
RUN echo 'root:ubuntu' | chpasswd
ENV DEBIAN_FRONTEND noninteractive
# install ssh server
RUN apt-get update && apt-get install -y \
openssh-server sudo \
&& rm -rf /var/lib/apt/lists/*
# workdir for ssh
RUN mkdir -p /run/sshd
# generate server keys
RUN ssh-keygen -A
# allow root to login
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config
EXPOSE 22
# run ssh server
CMD ["/usr/sbin/sshd", "-D", "-o", "ListenAddress=0.0.0.0"]
You can probably start the ssh server when starting your container. Something like this:
docker run ubuntu /usr/sbin/sshd -D
Check out this official tutorial.
This is what I did:
FROM nginx
# install gosu
# seealso:
# https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
# https://github.com/tianon/gosu/blob/master/INSTALL.md
# https://github.com/tianon/gosu
RUN set -eux; \
apt-get update; \
apt-get install -y gosu; \
rm -rf /var/lib/apt/lists/*; \
# verify that the binary works
gosu nobody true
ENV myenv='default'
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
COPY entrypoint.sh /entrypoint.sh
ENV AIRFLOW_HOME=/usr/local/airflow
RUN mkdir $AIRFLOW_HOME
RUN groupadd --gid 8080 airflow
RUN useradd --uid 8080 --gid 8080 -ms /bin/bash -d $AIRFLOW_HOME airflow
RUN echo 'airflow:mypass' | chpasswd
EXPOSE 22
CMD ["/entrypoint.sh"]
Inside entrypoint.sh:
#!/bin/bash
echo "starting ssh as root"
gosu root service ssh start &
#gosu root /usr/sbin/sshd -D &
echo "starting tail user"
exec gosu airflow tail -f /dev/null
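To try it out, a build and run along these lines should work (image name and host port are arbitrary; this assumes entrypoint.sh is executable in the build context):
docker build -t nginx-with-sshd .
docker run -d -p 2222:22 nginx-with-sshd
ssh airflow@localhost -p 2222
# password: mypass (from the chpasswd line in the Dockerfile above)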
Well, I used the following command to solve that
docker run -i -t mycentos6 /bin/bash -c '/etc/init.d/sshd start && /bin/bash'
First log in to your container and write an initialization script /bin/init as follows:
# execute in the container
cat <<EOT >> /bin/init
#!/bin/bash
service ssh start
while true; do sleep 1; done
EOT
Then make sure the root user is permitted to log in via ssh:
# execute in the container
echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
Commit the container to a new image after exiting from the container:
# execute in the server
docker commit <YOUR_CONTAINER> <ANY_REPO>:<ANY_TAG>
From now on, as long as you run your container with the following command, the ssh service will be automatically started.
# execute in the server
docker run -it -d --name <NAME> <REPO>:<TAG> /bin/init
docker exec -it <NAME> /bin/bash
Done.
You can try a more elegant way to do that with phusion/baseimage-docker
https://github.com/phusion/baseimage-docker#readme
