Copy a file from local to docker container via a shell script - docker

I have the following folder structure
db
- build.sh
- Dockerfile
- file.txt
build.sh
PGUID=$(id -u postgres)
PGGID=$(id -g postgres)
CS=$(lsb_release -cs)
docker build --build-arg POSTGRES_UID=${PGUID} --build-arg POSTGRES_GID=${PGGID} --build-arg LSB_CS=${CS} -t postgres:1.0 .
docker run -d postgres:1.0 sh -c "cp file.txt ./file.txt"
Dockerfile
FROM ubuntu:19.10
RUN apt-get update
ARG LSB_CS=$LSB_CS
RUN echo "lsb_release: ${LSB_CS}"
RUN apt-get install -y sudo \
&& sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt eoan-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
RUN apt-get install -y wget \
&& apt-get install -y gnupg \
&& wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | \
sudo apt-key add -
RUN apt-get update
RUN apt-get install tzdata -y
ARG POSTGRES_GID=128
RUN groupadd -g $POSTGRES_GID postgres
ARG POSTGRES_UID=122
RUN useradd -r -g postgres -u $POSTGRES_UID postgres
RUN apt-get update && apt-get install -y postgresql-10
RUN locale-gen en_US.UTF-8
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/10/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/10/main/postgresql.conf
EXPOSE 5432
CMD ["pg_ctlcluster", "--foreground", "10", "main", "start"]
file.txt
"Hello Hello"
Basically I want to be able to build my image, start my container, and copy file.txt from my local machine into the Docker container.
I tried doing it like this: docker run -d postgres:1.0 sh -c "cp file.txt ./file.txt", but it doesn't work. I have also tried other options, but none of them work either.
At the moment, when I run my script with sh build.sh, it runs everything and even starts a container, but it doesn't copy the file over to the container.
Any help on this is appreciated.

Sounds like what you want is to mount the file into a location inside your Docker container.
You can mount a local directory into your container and access it from the inside:
mkdir /some/dirname
cp file.txt /some/dirname/
# run as daemon, mount /some/dirname to /directory/in/container, run sh
docker run -d -v /some/dirname:/directory/in/container postgres:1.0 sh
Minimal working example:
On the Windows host:
d:\>mkdir d:\temp
d:\>mkdir d:\temp\docker
d:\>mkdir d:\temp\docker\dir
d:\>echo "SomeDataInFile" > d:\temp\docker\dir\file.txt
# mount one local file to root in docker container, renaming it in the process
d:\>docker run -it -v d:\temp\docker\dir\file.txt:/other_file.txt alpine
In docker container:
/ # ls
bin etc lib mnt other_file.txt root sbin sys usr
dev home media opt proc run srv tmp var
/ # cat other_file.txt
"SomeDataInFile"
/ # echo 32 >> other_file.txt
/ # cat other_file.txt
"SomeDataInFile"
32
/ # exit
This will mount the (outside) directory/file as a folder/file inside your container. If you specify a directory/file inside your container that already exists, it will be shadowed.
Back on the Windows host:
d:\>more d:\temp\docker\dir\file.txt
"SomeDataInFile"
32
See e.g. Docker volumes vs mount bind - Use cases on Serverfault for more info about bind mounts and volumes.
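If you only need a one-off copy rather than a shared mount, docker cp also works on a running container. A minimal sketch, assuming the container is started with a known name (--name db is an assumption, not part of the original script):
docker run -d --name db postgres:1.0
docker cp file.txt db:/file.txt
Unlike the cp inside your docker run command, docker cp reads the file from the host, so it does not need to exist in the image first.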

Related

Run Python scripts on command line running Docker images

I built a Docker image using a Dockerfile, with Python and some libraries inside (but not my project code). In my local working dir there are some scripts to be run in the Docker container. So, here is what I did:
$ cd /path/to/my_workdir
$ docker run -it --name test -v `pwd`:`pwd` -w `pwd` my/code:test python src/main.py --config=test --results-dir=/home/me/Results
The command python src/main.py --config=test --results-dir=/home/me/Results is what I want to run inside the Docker container.
However, it returns,
/home/docker/miniconda3/bin/python: /home/docker/miniconda3/bin/python: cannot execute binary file
How can I fix it and run my code?
Here is my Dockerfile
FROM nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04
MAINTAINER Me <me@me.com>
RUN apt update -yq && \
apt install -yq curl wget unzip git vim cmake sudo
RUN adduser --disabled-password --gecos '' docker && \
adduser docker sudo && \
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER docker
WORKDIR /home/docker/
RUN chmod a+rwx /home/docker/ && \
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
bash Miniconda3-latest-Linux-x86_64.sh -b && rm Miniconda3-latest-Linux-x86_64.sh
ENV PATH /home/docker/miniconda3/bin:$PATH
RUN pip install absl-py==0.5.0 atomicwrites==1.2.1 attrs==18.2.0 certifi==2018.8.24 chardet==3.0.4 cycler==0.10.0 docopt==0.6.2 enum34==1.1.6 future==0.16.0 idna==2.7 imageio==2.4.1 jsonpickle==1.2 kiwisolver==1.0.1 matplotlib==3.0.0 mock==2.0.0 more-itertools==4.3.0 mpyq==0.2.5 munch==2.3.2 numpy==1.15.2 pathlib2==2.3.2 pbr==4.3.0 Pillow==5.3.0 pluggy==0.7.1 portpicker==1.2.0 probscale==0.2.3 protobuf==3.6.1 py==1.6.0 pygame==1.9.4 pyparsing==2.2.2 pysc2==3.0.0 pytest==3.8.2 python-dateutil==2.7.3 PyYAML==3.13 requests==2.19.1 s2clientprotocol==4.10.1.75800.0 sacred==0.8.1 scipy==1.1.0 six==1.11.0 sk-video==1.1.10 snakeviz==1.0.0 tensorboard-logger==0.1.0 torch==0.4.1 torchvision==0.2.1 tornado==5.1.1 urllib3==1.23
USER docker
ENTRYPOINT ["/bin/bash"]
Try making the file executable before running it, as John mentioned, by adding it in the Dockerfile:
FROM python:latest
COPY src/main.py /usr/local/share/
RUN chmod +x /usr/local/share/main.py # <-- just add this line
CMD ["/usr/local/share/main.py", "--config=test", "--results-dir=/home/me/Results"]
You can run a Python script in Docker by adding this to your Dockerfile:
FROM python:latest
COPY src/main.py /usr/local/share/
CMD ["src/main.py", "--config=test --results-dir=/home/me/Results"]

Permissions in Docker volume

I am struggling with permissions on a Docker volume: I get access denied when writing.
This is a small part of my docker file
FROM ubuntu:18.04
RUN apt-get update && \
apt-get install -y \
apt-transport-https \
build-essential \
ca-certificates \
curl \
vim && \............
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash - && apt-get install -y nodejs
# Add non-root user
ARG USER=user01
RUN useradd -Um -d /home/$USER -s /bin/bash $USER && \
apt install -y python3-pip && \
pip3 install qrcode[pil]
#Copy that startup.sh into the scripts folder
COPY /scripts/startup.sh /scripts/startup.sh
#Making the startup.sh executable
RUN chmod -v +x /scripts/startup.sh
#Copy node API files
COPY --chown=user1 /node_api/* /home/user1/
USER $USER
WORKDIR /home/$USER
# Expose needed ports
EXPOSE 3000
VOLUME /data_storage
ENTRYPOINT [ "/scripts/startup.sh" ]
Also a small part of my startup.sh
#!/bin/bash
/usr/share/lib/provision.py --enterprise-seed $ENTERPRISE_SEED > config.json
Then my docker build command:
sudo docker build -t mycontainer .
And the docker run command:
sudo docker run -v data_storage:/home/user01/.client -p 3008:3000 -itd mycontainer
The problem I have is that the Python script creates the folder /home/user01/.client and copies some files into it. That always worked fine. But now I want those files, which are data files, in a volume for backup purposes. And with the volume mapped, I get permission denied, so the Python script is no longer able to write.
So at the end of my Dockerfile, this instruction combined with the mapping in the docker run command gives me the permission denied:
VOLUME /data_storage
Any suggestions on how to resolve this? Are some more permissions needed for "user01"?
Thanks
I was able to resolve my issue by removing the VOLUME instruction from the Dockerfile and just doing the mapping when executing docker run:
sudo docker run -v data_storage:/home/user01/.client -p 3008:3000 -itd mycontainer
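An alternative sketch, assuming the user01 setup from the question: when a named volume is mounted for the first time at a path that already exists in the image, Docker copies that path's contents and ownership into the volume. So pre-creating the mount point with the right owner in the Dockerfile also avoids the permission error:
# create the mount point owned by the non-root user at build time;
# a fresh named volume inherits this ownership on first mount
RUN mkdir -p /home/user01/.client && chown -R user01:user01 /home/user01/.client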

Run docker with jenkins user inside jenkins container on Centos7

I am trying to run Docker inside my Jenkins slave container on CentOS 7.1.
These are the steps I performed in my Dockerfile:
FROM java:8
ARG user=jenkins
ARG group=jenkins
ARG uid=1000
ARG gid=1000
RUN groupadd -g ${gid} ${group} \
&& useradd -d "$JENKINS_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
RUN groupadd -g 983 docker \
&& gpasswd -a ${user} docker
So I have a user jenkins (uid 1000) in a group jenkins (gid 1000) and in a group docker (gid 983). Why did I choose gid 983?
Well if I check /etc/group on my host I see:
docker:x:983:centos
In my docker-compose file I'm mounting the Docker socket, which is why I used the same gid as on my host.
Part of docker-compose:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /usr/bin/docker:/usr/bin/docker
When I exec inside my container as root:
root@c4af16c386d7:/var/jenkins_home# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
jenkins-slave 1.0 94a5d6606f86 10 minutes
jenkins 2.7.1 b4974ba62598 3 weeks ago 741 MB
java 8-jdk 264282a59a95 7 weeks ago 669.2 MB
But as jenkins user:
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
In my container:
cat /etc/passwd
jenkins:x:1000:1000::/var/jenkins_home:/bin/bash
cat /etc/group
jenkins:x:1000:
docker:x:983:jenkins
Addition:
$ docker exec -it ec52d4125a02 bash
root@ec52d4125a02:/var/jenkins_home# whoami
root
root@ec52d4125a02:/var/jenkins_home# su jenkins
jenkins@ec52d4125a02:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a23521523249 jenkins:2.7.1 "/bin/tini -- /usr/lo" 20 minutes ago Up 20 minutes 0.0.0.0:8080->8080/tcp, 0.0.0.0:32777->22/tcp, 0.0.0.0:32776->50000/tcp jenkins-master
ec52d4125a02 jenkins-slave:1.0 "setup-sshd" 20 minutes ago Up 20 minutes 0.0.0.0:32775->22/tcp, 0.0.0.0:32774->8080/tcp, 0.0.0.0:32773->50000/tcp jenkins-slave
but:
$ docker exec -it -u jenkins ec52d4125a02 bash
jenkins@ec52d4125a02:~$ docker ps
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
In the first case my jenkins user:
uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins),983(docker)
In the second case:
uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins)
First, why do you need to spin up containers from inside another container with Jenkins? Here's why this is not a good idea.
Having said that, if you still want to go ahead: there are several steps you need to take to run Docker inside a Docker container. For example, have you started this container in --privileged mode?
You should try using Jerome Petazzoni's Docker in Docker as it does everything you need.
You can then combine DInD's stuff with a Jenkins installation. Here's an example that I've put together by mashing up Jerome's DInD with other things and assembling a docker container that has Jenkins, Docker Compose and other useful stuff:
Dockerfile:
FROM ubuntu:xenial
ENV UBUNTU_FLAVOR xenial
#== Ubuntu flavors - common
RUN echo "deb http://archive.ubuntu.com/ubuntu ${UBUNTU_FLAVOR} main universe\n" > /etc/apt/sources.list \
&& echo "deb http://archive.ubuntu.com/ubuntu ${UBUNTU_FLAVOR}-updates main universe\n" >> /etc/apt/sources.list
MAINTAINER Rogério Peixoto
ENV JENKINS_HOME /var/jenkins_home
ENV JENKINS_SLAVE_AGENT_PORT 50000
ARG user=jenkins
ARG group=jenkins
ARG uid=1000
ARG gid=1000
# Jenkins is run with user `jenkins`, uid = 1000
# If you bind mount a volume from the host or a data container,
# ensure you use the same uid
RUN groupadd -g ${gid} ${group} \
&& useradd -d "$JENKINS_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
# useful stuff.
RUN apt-get update -q && apt-get install -qy \
apt-transport-https \
ca-certificates \
curl \
lxc \
supervisor \
zip \
git \
iptables \
locales \
nano \
make \
openssh-client \
openjdk-8-jdk-headless \
&& rm -rf /var/lib/apt/lists/*
# Install Docker from Docker Inc. repositories.
RUN curl -sSL https://get.docker.com/ | sh
# Install the wrapper script from https://raw.githubusercontent.com/docker/docker/master/hack/dind.
ADD ./wrapdocker /usr/local/bin/wrapdocker
RUN chmod +x /usr/local/bin/wrapdocker
# Define additional metadata for our image.
VOLUME /var/lib/docker
ENV JENKINS_VERSION 2.8
ENV JENKINS_SHA 4d83a40319ecf4eaab2344a18c197bd693080530
RUN mkdir -p /usr/share/jenkins/ \
&& curl -SL http://repo.jenkins-ci.org/public/org/jenkins-ci/main/jenkins-war/${JENKINS_VERSION}/jenkins-war-${JENKINS_VERSION}.war -o /usr/share/jenkins/jenkins.war
# RUN echo "$JENKINS_SHA /usr/share/jenkins/jenkins.war" | sha1sum -c -
ENV JENKINS_UC https://updates.jenkins.io
RUN mkdir -p /usr/share/jenkins/ref \
&& chown -R ${user} "$JENKINS_HOME" /usr/share/jenkins/ref
RUN usermod -a -G docker jenkins
ENV DOCKER_COMPOSE_VERSION 1.8.0-rc1
# Install Docker Compose
RUN curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
RUN apt-get install -y python-pip && pip install supervisor-stdout
EXPOSE 8080
EXPOSE 50000
ADD supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord"]
supervisord.conf
[supervisord]
nodaemon=true
[program:docker]
priority=10
command=wrapdocker
startsecs=0
exitcodes=0,1
[program:chown]
priority=20
command=chown -R jenkins:jenkins /var/jenkins_home
startsecs=0
[program:jenkins]
priority=30
user=jenkins
environment=JENKINS_HOME="/var/jenkins_home",HOME="/var/jenkins_home",USER="jenkins"
command=java -jar /usr/share/jenkins/jenkins.war
stdout_events_enabled = true
stderr_events_enabled = true
[eventlistener:stdout]
command=supervisor_stdout
buffer_size=100
events=PROCESS_LOG
result_handler=supervisor_stdout:event_handler
You can get the wrapdocker file here
Put all that in the same directory and build it:
docker build -t my_dind_jenkins .
Then run it:
docker run -d --privileged \
--name=master-jenkins \
-p 8080:8080 \
-p 50000:50000 my_dind_jenkins
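Once it is up, you can sanity-check the inner Docker daemon (using the container name from the run command above; wrapdocker may need a moment to start):
# both client and daemon inside the container should answer
docker exec -it master-jenkins docker version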

How to mount the current working directory onto Docker container?

I am trying to mount the current working directory onto a Docker container, but it isn't working. Here is my Dockerfile:
FROM ubuntu:14.04.3
MAINTAINER Upendra Devisetty
RUN apt-get update && apt-get install -y g++ \
make \
git \
zlib1g-dev \
python \
wget \
curl \
python-matplotlib
ENV BINPATH /usr/bin
ENV HISAT2GIT https://upendra_35@bitbucket.org/upendra_35/evolinc.git
RUN git clone "$HISAT2GIT"
RUN chmod +x evolinc/evolinc-part-I.sh && cp evolinc/evolinc-part-I.sh $BINPATH
RUN wget -O- http://cole-trapnell-lab.github.io/cufflinks/assets/downloads/cufflinks-2.2.1.Linux_x86_64.tar.gz | tar xzvf -
RUN wget -O- https://github.com/TransDecoder/TransDecoder/archive/2.0.1.tar.gz | tar xzvf -
RUN wget -O- http://seq.cs.iastate.edu/CAP3/cap3.linux.x86_64.tar | tar vfx -
RUN curl ftp://ftp.ncbi.nlm.nih.gov/blast/executables/blast+/LATEST/ncbi-blast-2.2.31+-x64-linux.tar.gz > ncbi-blast-2.2.31+-x64-linux.tar.gz
RUN tar xvf ncbi-blast-2.2.31+-x64-linux.tar.gz
RUN wget -O- http://ftp.mirrorservice.org/sites/download.sourceforge.net/pub/sourceforge/q/qu/quast/quast-3.0.tar.gz | tar zxvf -
RUN curl -L http://cpanmin.us | perl - App::cpanminus
RUN cpanm URI/Escape.pm
ENV PATH /CAP3/:$PATH
ENV PATH /ncbi-blast-2.2.31+/bin/:$PATH
ENV PATH /quast-3.0/:$PATH
ENV PATH /cufflinks-2.2.1.Linux_x86_64/:$PATH
ENV PATH /TransDecoder-2.0.1/:$PATH
ENTRYPOINT ["/usr/bin/evolinc-part-I.sh"]
CMD ["-h"]
When I run the following to mount the current working directory, to make sure everything is doing OK, what I see is that all those dependencies are getting installed in the current working directory.
docker run --rm -v $(pwd):/working-dir -w /working-dir ubuntu/evolinc:2.0 -c cuffcompare_out_annot_no_annot.combined.gtf -g Brassica_rapa_v1.2_genome.fa -r Brassica_rapa_v1.2_cds.fa -b TE_RNA_transcripts.fa
I thought they should only be installed in the container, and only the output would be generated in the current working directory. Sorry, I am very new to Docker and I would need some help with this.
Mounting a volume in Docker (-v) allows a container to share directories/volumes with the host. Therefore, when changing the volume you are in fact changing the mounted host directory. If you want to copy some files into the image, rather than point at them, you need to build your own image and use the COPY or ADD instructions.
What is the difference between the `COPY` and `ADD` commands in a Dockerfile?
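To make the contrast concrete (a sketch reusing the file names from the question): COPY fixes a file into the image at build time, whereas -v maps a host directory in at run time, so anything written under the mount shows up on the host:
# in the Dockerfile: baked into the image when you build it
COPY Brassica_rapa_v1.2_genome.fa /data/
# at run time: the host's current directory is shared, not copied
docker run --rm -v $(pwd):/working-dir -w /working-dir ubuntu/evolinc:2.0 -h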

Start sshd automatically with docker container

Given:
container based on ubuntu:13.10
installed ssh (via apt-get install ssh)
Problem: each time I start the container I have to run sshd manually with service ssh start
Tried: update-rc.d ssh defaults, but it does not help.
Question: how do I set up the container to start the sshd service automatically during container start?
Just try:
ENTRYPOINT service ssh restart && bash
in your Dockerfile; it works fine for me!
More details here: How to automatically start a service when running a docker container?
Here is a Dockerfile which installs an ssh server and runs it:
# Build Ubuntu image with base functionality.
FROM ubuntu:focal AS ubuntu-base
ENV DEBIAN_FRONTEND noninteractive
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
# Setup the default user.
RUN useradd -rm -d /home/ubuntu -s /bin/bash -g root -G sudo ubuntu
RUN echo 'ubuntu:ubuntu' | chpasswd
USER ubuntu
WORKDIR /home/ubuntu
# Build image with Python and SSHD.
FROM ubuntu-base AS ubuntu-with-sshd
USER root
# Install required tools.
RUN apt-get -qq update \
&& apt-get -qq --no-install-recommends install vim-tiny=2:8.1.* \
&& apt-get -qq --no-install-recommends install sudo=1.8.* \
&& apt-get -qq --no-install-recommends install python3-pip=20.0.* \
&& apt-get -qq --no-install-recommends install openssh-server=1:8.* \
&& apt-get -qq clean \
&& rm -rf /var/lib/apt/lists/*
# Configure SSHD.
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
RUN mkdir /var/run/sshd
RUN bash -c 'install -m755 <(printf "#!/bin/sh\nexit 0") /usr/sbin/policy-rc.d'
RUN ex +'%s/^#\zeListenAddress//g' -scwq /etc/ssh/sshd_config
RUN ex +'%s/^#\zeHostKey .*ssh_host_.*_key//g' -scwq /etc/ssh/sshd_config
RUN RUNLEVEL=1 dpkg-reconfigure openssh-server
RUN ssh-keygen -A -v
RUN update-rc.d ssh defaults
# Configure sudo.
RUN ex +"%s/^%sudo.*$/%sudo ALL=(ALL:ALL) NOPASSWD:ALL/g" -scwq! /etc/sudoers
# Generate and configure user keys.
USER ubuntu
RUN mkdir -p ~/.ssh && ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
#COPY --chown=ubuntu:root "./files/authorized_keys" /home/ubuntu/.ssh/authorized_keys
# Setup default command and/or parameters.
EXPOSE 22
CMD ["/usr/bin/sudo", "/usr/sbin/sshd", "-D", "-o", "ListenAddress=0.0.0.0"]
Build with the following command:
docker build --target ubuntu-with-sshd -t ubuntu-with-sshd .
Then run with:
docker run -p 2222:22 ubuntu-with-sshd
To connect to the container via the local port, run: ssh -v ubuntu@localhost -p 2222.
To check for container IP address, use docker ps and docker inspect.
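For example, to print just the container's IP address (replace <container> with your container name or ID):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container>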
Here is an example docker-compose.yml file:
---
version: '3.4'
services:
ubuntu-with-sshd:
image: "ubuntu-with-sshd:latest"
build:
context: "."
target: "ubuntu-with-sshd"
networks:
mynet:
ipv4_address: 172.16.128.2
ports:
- "2222:22"
privileged: true # Required for /usr/sbin/init
networks:
mynet:
ipam:
config:
- subnet: 172.16.128.0/24
To run, type:
docker-compose up --build
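You can then connect through the published port or, from the Docker host itself, through the static address assigned in the compose file (the credentials are the ubuntu:ubuntu pair set in the Dockerfile above):
# via the published port
ssh -p 2222 ubuntu@localhost
# or via the fixed address on mynet
ssh ubuntu@172.16.128.2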
I think the correct way to do it would be to follow Docker's instructions for dockerizing the ssh service.
In relation to the specific question, the following lines added at the end of the Dockerfile will achieve what you were looking for:
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Dockerize a SSHD service
I have created a Dockerfile to run ssh inside. I think it is not secure, but for testing/development in a DMZ it could be OK:
FROM ubuntu:20.04
USER root
# change root password to `ubuntu`
RUN echo 'root:ubuntu' | chpasswd
ENV DEBIAN_FRONTEND noninteractive
# install ssh server
RUN apt-get update && apt-get install -y \
openssh-server sudo \
&& rm -rf /var/lib/apt/lists/*
# workdir for ssh
RUN mkdir -p /run/sshd
# generate server keys
RUN ssh-keygen -A
# allow root to login
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config
EXPOSE 22
# run ssh server
CMD ["/usr/sbin/sshd", "-D", "-o", "ListenAddress=0.0.0.0"]
You can probably start the ssh server when starting your container. Something like this:
docker run ubuntu /usr/sbin/sshd -D
Check out this official tutorial.
This is what I did:
FROM nginx
# install gosu
# seealso:
# https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
# https://github.com/tianon/gosu/blob/master/INSTALL.md
# https://github.com/tianon/gosu
RUN set -eux; \
apt-get update; \
apt-get install -y gosu; \
rm -rf /var/lib/apt/lists/*; \
# verify that the binary works
gosu nobody true
ENV myenv='default'
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
COPY entrypoint.sh /entrypoint.sh
# make sure the entrypoint is executable regardless of its mode in the build context
RUN chmod +x /entrypoint.sh
ENV AIRFLOW_HOME=/usr/local/airflow
RUN mkdir $AIRFLOW_HOME
RUN groupadd --gid 8080 airflow
RUN useradd --uid 8080 --gid 8080 -ms /bin/bash -d $AIRFLOW_HOME airflow
RUN echo 'airflow:mypass' | chpasswd
EXPOSE 22
CMD ["/entrypoint.sh"]
Inside entrypoint.sh:
echo "starting ssh as root"
gosu root service ssh start &
#gosu root /usr/sbin/sshd -D &
echo "starting tail user"
exec gosu airflow tail -f /dev/null
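To use it (a sketch; the image name is arbitrary, and the password is the mypass set in the Dockerfile above):
docker build -t airflow-ssh .
docker run -d -p 2222:22 airflow-ssh
ssh -p 2222 airflow@localhost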
Well, I used the following command to solve that
docker run -i -t mycentos6 /bin/bash -c '/etc/init.d/sshd start && /bin/bash'
First, log in to your container and write an initialization script /bin/init as follows:
# execute in the container
cat <<EOT > /bin/init
#!/bin/bash
service ssh start
while true; do sleep 1; done
EOT
chmod +x /bin/init
Then make sure the root user is permitted to log in via ssh:
# execute in the container
echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
Commit the container to a new image after exiting from the container:
# execute in the server
docker commit <YOUR_CONTAINER> <ANY_REPO>:<ANY_TAG>
From now on, as long as you run your container with the following command, the ssh service will be automatically started.
# execute in the server
docker run -it -d --name <NAME> <REPO>:<TAG> /bin/init
docker exec -it <NAME> /bin/bash
Done.
You can try a more elegant way to do that with phusion/baseimage-docker
https://github.com/phusion/baseimage-docker#readme
