docker image - centos 7 > ssh service not found - docker

I installed the CentOS 7 Docker image on my Ubuntu machine, but the ssh service is not found, so I can't run it.
[root@990e92224a82 /]# yum install openssh-server openssh-clients
Loaded plugins: fastestmirror, ovl
Loading mirror speeds from cached hostfile
* base: mirror.dhakacom.com
* extras: mirror.dhakacom.com
* updates: mirror.dhakacom.com
Package openssh-server-6.6.1p1-31.el7.x86_64 already installed and latest version
Package openssh-clients-6.6.1p1-31.el7.x86_64 already installed and latest version
Nothing to do
[root@990e92224a82 /]# ss
ssh ssh-agent ssh-keygen sshd ssltap
ssh-add ssh-copy-id ssh-keyscan sshd-keygen
How can I log in to the Docker container remotely?

You have to add the following instructions to your Dockerfile.
RUN yum install -y sudo wget telnet openssh-server vim git ncurses-term
RUN useradd your_account
RUN mkdir -p /home/your_account/.ssh && chown -R your_account /home/your_account/.ssh/
# Create known_hosts
RUN touch /home/your_account/.ssh/known_hosts
COPY files/authorized_keys /home/your_account/.ssh/
COPY files/config /home/your_account/.ssh/
COPY files/pam.d/sshd /etc/pam.d/sshd
RUN touch /home/your_account/.ssh/environment
RUN chown -R your_account /home/your_account/.ssh
RUN chmod 400 -R /home/your_account/.ssh/*
RUN chmod 700 -R /home/your_account/.ssh/known_hosts
RUN chmod 700 /home/your_account/.ssh/environment
# Enable sshd
COPY files/sshd_config /etc/ssh/
RUN ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -t rsa
# Add the account to sudoers so that it does not need to type a password
COPY files/sudoers /etc/
COPY files/start.sh /root/
I had to remove "pam_nologin.so" from the file /etc/pam.d/sshd, because after upgrading openssh-server to openssh-server-6.6.1p1-31.el7, pam_nologin.so disallowed remote login for all users even though the file /etc/nologin does not exist.
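One way to drop that PAM line during the image build (a sketch; the sed expression simply deletes any line mentioning pam_nologin.so) is:
RUN sed -i '/pam_nologin.so/d' /etc/pam.d/sshd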
start.sh
#!/bin/bash
/usr/sbin/sshd -E /tmp/sshd.log
Start the CentOS container:
docker run -d -t -p $(sshPort):22 --name $(containerName) $(imageName) /bin/bash
docker exec -d $(containerName) bash -c "sh /root/start.sh"
Log in to the container:
ssh -p $(sshPort) your_account@$(dockerIp)

To extend @puritys's answer: you could do this in the Dockerfile instead.
At the end of the file:
ENTRYPOINT /usr/sbin/sshd -E /tmp/sshd.log && /bin/bash
Then you will only need to run:
docker run -d -t -p $(sshPort):22 --name $(containerName) $(imageName) /bin/bash

Related

Copy a file from local to docker container via a shell script

I have the following folder structure
db
- build.sh
- Dockerfile
- file.txt
build.sh
PGUID=$(id -u postgres)
PGGID=$(id -g postgres)
CS=$(lsb_release -cs)
docker build --build-arg POSTGRES_UID=${PGUID} --build-arg POSTGRES_GID=${PGGID} --build-arg LSB_CS=${CS} -t postgres:1.0 .
docker run -d postgres:1.0 sh -c "cp file.txt ./file.txt"
Dockerfile
FROM ubuntu:19.10
RUN apt-get update
ARG LSB_CS=$LSB_CS
RUN echo "lsb_release: ${LSB_CS}"
RUN apt-get install -y sudo \
&& sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt eoan-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
RUN apt-get install -y wget \
&& apt-get install -y gnupg \
&& wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | \
sudo apt-key add -
RUN apt-get update
RUN apt-get install tzdata -y
ARG POSTGRES_GID=128
RUN groupadd -g $POSTGRES_GID postgres
ARG POSTGRES_UID=122
RUN useradd -r -g postgres -u $POSTGRES_UID postgres
RUN apt-get update && apt-get install -y postgresql-10
RUN locale-gen en_US.UTF-8
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/10/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/10/main/postgresql.conf
EXPOSE 5432
CMD ["pg_ctlcluster", "--foreground", "10", "main", "start"]
file.txt
"Hello Hello"
Basically I want to be able to build my image, start my container, and copy file.txt from my local machine into the Docker container.
I tried doing it like this: docker run -d postgres:1.0 sh -c "cp file.txt ./file.txt", but it doesn't work. I have tried other options as well, but they also don't work.
At the moment, when I run my script with sh build.sh, it runs everything and even starts a container, but it doesn't copy that file over to the container.
Any help on this is appreciated.
Sounds like what you want is to mount the file into a location in your Docker container.
You can mount a local directory into your container and access it from the inside:
mkdir /some/dirname
cp file.txt /some/dirname/
# run as demon, mount /some/dirname to /directory/in/container, run sh
docker run -d -v /some/dirname:/directory/in/container postgres:1.0 sh
Minimal working example:
On windows host:
d:\>mkdir d:\temp
d:\>mkdir d:\temp\docker
d:\>mkdir d:\temp\docker\dir
d:\>echo "SomeDataInFile" > d:\temp\docker\dir\file.txt
# mount one local file to root in docker container, renaming it in the process
d:\>docker run -it -v d:\temp\docker\dir\file.txt:/other_file.txt alpine
In docker container:
/ # ls
bin etc lib mnt other_file.txt root sbin sys usr
dev home media opt proc run srv tmp var
/ # cat other_file.txt
"SomeDataInFile"
/ # echo 32 >> other_file.txt
/ # cat other_file.txt
"SomeDataInFile"
32
/ # exit
This will mount the (outside) directory/file as a folder/file inside your container. If you specify a directory/file inside your container that already exists, it will be shadowed.
Back on windows host:
d:\>more d:\temp\docker\dir\file.txt
"SomeDataInFile"
32
See e.g. Docker volumes vs mount bind - Use cases on Serverfault for more info about ways to bind mount or use volumes.
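Alternatively, if the file only needs to be baked into the image at build time rather than shared with the host, a COPY instruction in the question's Dockerfile would also do the job (a minimal sketch, assuming file.txt sits next to the Dockerfile as in the folder structure shown):
# in the Dockerfile
COPY file.txt /file.txt
After rebuilding with build.sh, every container created from the image will contain the file.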

Unknown instruction: SUDO, when I try to build the docker image

When I try to build the Dockerfile below, I get the error "Error response from daemon: Dockerfile parse error line 12: unknown instruction: SUDO".
FROM jenkins
USER root
RUN apt-get -qqy update; apt-get install -qqy sudo
RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers
RUN wget http://get.docker.com/builds/Linux/x86_64/docker-latest.tgz
RUN tar -xvzf docker-latest.tgz
RUN mv docker/* /usr/bin/
USER jenkins
RUN /usr/local/bin/install-plugins.sh junit git git-client ssh-slaves greenballs chucknorris ws-cleanup
sudo mkdir -p /var/jenkins_home
cd /var/jenkins_home
sudo chown -R 1000 /var/jenkins_home
The commands below don't belong to Dockerfile syntax:
sudo mkdir -p /var/jenkins_home
cd /var/jenkins_home
sudo chown -R 1000 /var/jenkins_home
Add RUN in front of them if you want to run them at build time. But the good practice is to mount a folder from the local machine into the container. If you are trying to map the Jenkins home folder, then create the /var/jenkins_home folder on the local system and mount it into the Docker container with the -v option.
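A sketch of both options (the image tag my-jenkins is only illustrative; note that a plain cd has no lasting effect in a Dockerfile, WORKDIR is the equivalent instruction):
# Option 1: run the commands at build time, inside the Dockerfile
RUN mkdir -p /var/jenkins_home && chown -R 1000 /var/jenkins_home
WORKDIR /var/jenkins_home
# Option 2: keep the home folder on the host and bind mount it at run time
sudo mkdir -p /var/jenkins_home
sudo chown -R 1000 /var/jenkins_home
docker run -d -v /var/jenkins_home:/var/jenkins_home my-jenkins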
You can follow the given link for using Docker inside a dockerized Jenkins: https://medium.com/@manav503/how-to-build-docker-images-inside-a-jenkins-container-d59944102f30

docker container can't use `service sshd restart`

I am trying to build a hadoop Dockerfile.
In the build process, I added:
&& apt install -y openssh-client \
&& apt install -y openssh-server \
&& ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa \
&& cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys \
&& chmod 0600 ~/.ssh/authorized_keys \
&& sed -i '/\#AuthorizedKeysFile/ d' /etc/ssh/sshd_config \
&& echo "AuthorizedKeysFile ~/.ssh/authorized_keys" >> /etc/ssh/sshd_config \
&& /etc/init.d/ssh restart
I assumed that when I ran this container:
docker run -it --rm hadoop/tag bash
I would be able to:
ssh localhost
But I got an error:
ssh: connect to host localhost port 22: Connection refused
If I run this manually inside the container:
/etc/init.d/ssh restart
# or this
service ssh restart
Then I can connect. I am thinking that this means the sshd restart didn't work.
I am using FROM java in the Dockerfile.
The build process only builds an image. Processes that are run at that time (using RUN) are no longer running after the build, and are not started again when a container is launched using the image.
What you need to do is get sshd to start at container runtime. The simplest way to do that is using an entrypoint script.
Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["whatever", "your", "command", "is"]
entrypoint.sh:
#!/bin/sh
# Start the ssh server
/etc/init.d/ssh restart
# Execute the CMD
exec "$#"
Rebuild the image using the above, and when you use it to start a container, it should start sshd before running your CMD.
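For example, with the commands from the question (same hadoop/tag image name), the check could look like:
docker build -t hadoop/tag .
docker run -it --rm hadoop/tag bash
# inside the container, sshd is now already running
ssh localhost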
You can also change the base image you start from to something like Phusion baseimage if you prefer. It makes it easy to start some services like syslogd, sshd, that you may wish the container to have running.

Issue getting memcache container to automatically start in Docker

I have a container being built that only really contains memcached, and I want memcached to start when the container starts.
This is my current Dockerfile -
FROM centos:7
MAINTAINER Some guy <someguy@guysome.org>
RUN yum update -y
RUN yum install -y git https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
RUN yum install -y ansible && yum clean all -y
RUN yum install -y memcached
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
EXPOSE 11211/tcp 11211/udp
CMD ["/usr/bin/memcached"]
#CMD ["/usr/bin/memcached -u root"]
#CMD ["/usr/bin/memcached", "-D", "FOREGROUND"]
The container builds successfully, but when I try to run the container using the command
docker run -d -i -t -P <image id>, I cannot see the container in the list returned by docker ps.
I attempted to have my memcached service run the same way as my httpd container, but I cannot pass in the argument using the -D flag (since it's already a daemon, I'm guessing). This is how my httpd CMD was set up -
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
Locally, if I run the command /usr/bin/memcached -u root, it runs as a process, but when I try it in the container CMD, it informs me that it cannot find the specified file (having to do with the -u root section, I am guessing).
Setting the CMD to /bin/bash did not allow the service to start either.
How can I have my memcached service run and be visible when I run docker ps, so that I can open a bash session inside the container?
Thanks.
memcached will run in the foreground by default, which is what you want. The -d option would run memcached as a daemon which would cause the container to exit immediately.
The Dockerfile looks overly complex, try this
FROM centos:7
RUN yum update -y && yum install -y epel-release && yum install -y memcached && yum clean all
EXPOSE 11211
CMD ["/usr/bin/memcached","-p","11211","-u","memcached","-m","64"]
Then you can do what you need
$ docker build -t me/memcached .
<snipped build>
$ CID=$(docker create me/memcached)
$ docker start $CID
4ac5afed0641f07f4694c30476cef41104f6fd864c174958b971822005fd292a
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4ac5afed0641 me/memcached "/usr/bin/memcached -" About a minute ago Up 4 seconds 11211/tcp jovial_bardeen
$ docker exec $CID ps -ef
UID PID PPID C STIME TTY TIME CMD
memcach+ 1 0 0 01:03 ? 00:00:00 /usr/bin/memcached -p 11211 -u memcached -m 64
root 10 0 2 01:04 ? 00:00:00 ps -ef
$ docker exec -ti $CID bash
[root#4ac5afed0641 /]#
Or skip your Dockerfile if it actually only runs memcached and use:
docker run --name my-memcache -d memcached
At least to get your basic set-up going, and then you can update that official image as needed.
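For example, with the official image you could publish the default port and check that the process is reachable (the nc test assumes netcat is installed on the host):
docker run --name my-memcache -d -p 11211:11211 memcached
docker ps --filter name=my-memcache
printf 'stats\nquit\n' | nc localhost 11211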

Start sshd automatically with docker container

Given:
container based on ubuntu:13.10
installed ssh (via apt-get install ssh)
Problem: each time I start the container I have to run sshd manually with service ssh start
Tried: update-rc.d ssh defaults, but it does not help.
Question: how do I set up the container so that the sshd service starts automatically when the container starts?
Just try:
ENTRYPOINT service ssh restart && bash
in your Dockerfile, it works fine for me!
more details here: How to automatically start a service when running a docker container?
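A minimal sketch of a full Dockerfile around that line, assuming the ubuntu:13.10 base and the ssh package from the question:
FROM ubuntu:13.10
RUN apt-get update && apt-get install -y ssh
# start sshd, then drop into a shell so the container keeps running
ENTRYPOINT service ssh restart && bash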
Here is a Dockerfile which installs an SSH server and runs it:
# Build Ubuntu image with base functionality.
FROM ubuntu:focal AS ubuntu-base
ENV DEBIAN_FRONTEND noninteractive
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
# Setup the default user.
RUN useradd -rm -d /home/ubuntu -s /bin/bash -g root -G sudo ubuntu
RUN echo 'ubuntu:ubuntu' | chpasswd
USER ubuntu
WORKDIR /home/ubuntu
# Build image with Python and SSHD.
FROM ubuntu-base AS ubuntu-with-sshd
USER root
# Install required tools.
RUN apt-get -qq update \
&& apt-get -qq --no-install-recommends install vim-tiny=2:8.1.* \
&& apt-get -qq --no-install-recommends install sudo=1.8.* \
&& apt-get -qq --no-install-recommends install python3-pip=20.0.* \
&& apt-get -qq --no-install-recommends install openssh-server=1:8.* \
&& apt-get -qq clean \
&& rm -rf /var/lib/apt/lists/*
# Configure SSHD.
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
RUN mkdir /var/run/sshd
RUN bash -c 'install -m755 <(printf "#!/bin/sh\nexit 0") /usr/sbin/policy-rc.d'
RUN ex +'%s/^#\zeListenAddress/\1/g' -scwq /etc/ssh/sshd_config
RUN ex +'%s/^#\zeHostKey .*ssh_host_.*_key/\1/g' -scwq /etc/ssh/sshd_config
RUN RUNLEVEL=1 dpkg-reconfigure openssh-server
RUN ssh-keygen -A -v
RUN update-rc.d ssh defaults
# Configure sudo.
RUN ex +"%s/^%sudo.*$/%sudo ALL=(ALL:ALL) NOPASSWD:ALL/g" -scwq! /etc/sudoers
# Generate and configure user keys.
USER ubuntu
RUN ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
#COPY --chown=ubuntu:root "./files/authorized_keys" /home/ubuntu/.ssh/authorized_keys
# Setup default command and/or parameters.
EXPOSE 22
CMD ["/usr/bin/sudo", "/usr/sbin/sshd", "-D", "-o", "ListenAddress=0.0.0.0"]
Build with the following command:
docker build --target ubuntu-with-sshd -t ubuntu-with-sshd .
Then run with:
docker run -p 2222:22 ubuntu-with-sshd
To connect to container via local port, run: ssh -v localhost -p 2222.
To check for container IP address, use docker ps and docker inspect.
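For example, the container IP can be read directly from docker inspect (the Go template assumes the container is attached to at least one network; the container name is whatever docker ps reports):
docker ps
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name>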
Here is an example docker-compose.yml file:
---
version: '3.4'
services:
  ubuntu-with-sshd:
    image: "ubuntu-with-sshd:latest"
    build:
      context: "."
      target: "ubuntu-with-sshd"
    networks:
      mynet:
        ipv4_address: 172.16.128.2
    ports:
      - "2222:22"
    privileged: true # Required for /usr/sbin/init
networks:
  mynet:
    ipam:
      config:
        - subnet: 172.16.128.0/24
To run, type:
docker-compose up --build
I think the correct way to do it would be to follow Docker's instructions on dockerizing the SSH service.
In relation to the specific question, the following lines added at the end of the Dockerfile will achieve what you were looking for:
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Dockerize a SSHD service
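To try it out, a build-and-run sequence along the lines of that tutorial could look like this (the eg_sshd / test_sshd names are arbitrary):
docker build -t eg_sshd .
docker run -d -P --name test_sshd eg_sshd
docker port test_sshd 22   # shows which host port is mapped to port 22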
I have created a Dockerfile to run ssh inside. I think it is not secure, but for testing/development in a DMZ it could be OK:
FROM ubuntu:20.04
USER root
# change root password to `ubuntu`
RUN echo 'root:ubuntu' | chpasswd
ENV DEBIAN_FRONTEND noninteractive
# install ssh server
RUN apt-get update && apt-get install -y \
openssh-server sudo \
&& rm -rf /var/lib/apt/lists/*
# workdir for ssh
RUN mkdir -p /run/sshd
# generate server keys
RUN ssh-keygen -A
# allow root to login
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config
EXPOSE 22
# run ssh server
CMD ["/usr/sbin/sshd", "-D", "-o", "ListenAddress=0.0.0.0"]
You can probably start the ssh server when starting your container. Something like this:
docker run ubuntu /usr/sbin/sshd -D
Check out this official tutorial.
This is what I did:
FROM nginx
# install gosu
# seealso:
# https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
# https://github.com/tianon/gosu/blob/master/INSTALL.md
# https://github.com/tianon/gosu
RUN set -eux; \
apt-get update; \
apt-get install -y gosu; \
rm -rf /var/lib/apt/lists/*; \
# verify that the binary works
gosu nobody true
ENV myenv='default'
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
COPY entrypoint.sh /entrypoint.sh
ENV AIRFLOW_HOME=/usr/local/airflow
RUN mkdir $AIRFLOW_HOME
RUN groupadd --gid 8080 airflow
RUN useradd --uid 8080 --gid 8080 -ms /bin/bash -d $AIRFLOW_HOME airflow
RUN echo 'airflow:mypass' | chpasswd
EXPOSE 22
CMD ["/entrypoint.sh"]
Inside entrypoint.sh:
echo "starting ssh as root"
gosu root service ssh start &
#gosu root /usr/sbin/sshd -D &
echo "starting tail user"
exec gosu airflow tail -f /dev/null
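Once built, the container can be reached with the credentials set in the Dockerfile above (the image name and host port 2222 are arbitrary choices):
docker build -t nginx-sshd .
docker run -d -p 2222:22 nginx-sshd
ssh -p 2222 airflow@localhost   # password: mypass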
Well, I used the following command to solve that
docker run -i -t mycentos6 /bin/bash -c '/etc/init.d/sshd start && /bin/bash'
First, log in to your container and write an initialization script /bin/init as follows:
# execute in the container
cat <<EOT >> /bin/init
#!/bin/bash
service ssh start
while true; do sleep 1; done
EOT
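The script also needs to be executable, since it will later be passed directly as the container command:
# execute in the container
chmod +x /bin/init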
Then make sure the root user is permitted to log in via ssh:
# execute in the container
echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
Commit the container to a new image after exiting from the container:
# execute in the server
docker commit <YOUR_CONTAINER> <ANY_REPO>:<ANY_TAG>
From now on, as long as you run your container with the following command, the ssh service will be automatically started.
# execute in the server
docker run -it -d --name <NAME> <REPO>:<TAG> /bin/init
docker exec -it <NAME> /bin/bash
Done.
You can try a more elegant way to do that with phusion/baseimage-docker
https://github.com/phusion/baseimage-docker#readme
