Given:
a container based on ubuntu:13.10
ssh installed (via apt-get install ssh)
Problem: each time I start the container, I have to run sshd manually with service ssh start.
Tried: update-rc.d ssh defaults, but it does not help.
Question: how do I set up the container so that the sshd service starts automatically when the container starts?
Just try:
ENTRYPOINT service ssh restart && bash
in your dockerfile; it works fine for me!
more details here: How to automatically start a service when running a docker container?
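For reference, a minimal sketch of the whole Dockerfile (ubuntu:13.10 as in the question, though that release is long EOL, so substitute a current base image in practice):
FROM ubuntu:13.10
RUN apt-get update && apt-get install -y ssh
EXPOSE 22
ENTRYPOINT service ssh restart && bash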
Here is a Dockerfile which installs an SSH server and runs it:
# Build Ubuntu image with base functionality.
FROM ubuntu:focal AS ubuntu-base
ENV DEBIAN_FRONTEND noninteractive
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
# Setup the default user.
RUN useradd -rm -d /home/ubuntu -s /bin/bash -g root -G sudo ubuntu
RUN echo 'ubuntu:ubuntu' | chpasswd
USER ubuntu
WORKDIR /home/ubuntu
# Build image with Python and SSHD.
FROM ubuntu-base AS ubuntu-with-sshd
USER root
# Install required tools.
RUN apt-get -qq update \
&& apt-get -qq --no-install-recommends install vim-tiny=2:8.1.* \
&& apt-get -qq --no-install-recommends install sudo=1.8.* \
&& apt-get -qq --no-install-recommends install python3-pip=20.0.* \
&& apt-get -qq --no-install-recommends install openssh-server=1:8.* \
&& apt-get -qq clean \
&& rm -rf /var/lib/apt/lists/*
# Configure SSHD.
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
RUN mkdir /var/run/sshd
RUN bash -c 'install -m755 <(printf "#!/bin/sh\nexit 0") /usr/sbin/policy-rc.d'
RUN ex +'%s/^#\zeListenAddress//g' -scwq /etc/ssh/sshd_config
RUN ex +'%s/^#\zeHostKey .*ssh_host_.*_key//g' -scwq /etc/ssh/sshd_config
RUN RUNLEVEL=1 dpkg-reconfigure openssh-server
RUN ssh-keygen -A -v
RUN update-rc.d ssh defaults
# Configure sudo.
RUN ex +"%s/^%sudo.*$/%sudo ALL=(ALL:ALL) NOPASSWD:ALL/g" -scwq! /etc/sudoers
# Generate and configure user keys.
USER ubuntu
RUN mkdir -p ~/.ssh && ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
#COPY --chown=ubuntu:root "./files/authorized_keys" /home/ubuntu/.ssh/authorized_keys
# Setup default command and/or parameters.
EXPOSE 22
CMD ["/usr/bin/sudo", "/usr/sbin/sshd", "-D", "-o", "ListenAddress=0.0.0.0"]
Build with the following command:
docker build --target ubuntu-with-sshd -t ubuntu-with-sshd .
Then run with:
docker run -p 2222:22 ubuntu-with-sshd
To connect to the container via the local port, run: ssh -v localhost -p 2222.
To check the container's IP address, use docker ps and docker inspect.
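For example (the container ID is whatever docker ps shows; the -f template is standard docker inspect syntax):
docker ps
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-id>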
Here is example of docker-compose.yml file:
---
version: '3.4'
services:
  ubuntu-with-sshd:
    image: "ubuntu-with-sshd:latest"
    build:
      context: "."
      target: "ubuntu-with-sshd"
    networks:
      mynet:
        ipv4_address: 172.16.128.2
    ports:
      - "2222:22"
    privileged: true # Required for /usr/sbin/init
networks:
  mynet:
    ipam:
      config:
        - subnet: 172.16.128.0/24
To run, type:
docker-compose up --build
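Then connect either through the published port or the static address, with the ubuntu/ubuntu credentials set in the Dockerfile above:
ssh -p 2222 ubuntu@localhost
ssh ubuntu@172.16.128.2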
I think the correct way to do it would be to follow Docker's instructions for dockerizing an SSH service.
And in relation to the specific question, the following lines added at the end of the Dockerfile will achieve what you were looking for:
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Dockerize a SSHD service
I have created a Dockerfile to run ssh inside. I think it is not secure, but for testing/development in a DMZ it could be OK:
FROM ubuntu:20.04
USER root
# change root password to `ubuntu`
RUN echo 'root:ubuntu' | chpasswd
ENV DEBIAN_FRONTEND noninteractive
# install ssh server
RUN apt-get update && apt-get install -y \
openssh-server sudo \
&& rm -rf /var/lib/apt/lists/*
# workdir for ssh
RUN mkdir -p /run/sshd
# generate server keys
RUN ssh-keygen -A
# allow root to login
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config
EXPOSE 22
# run ssh server
CMD ["/usr/sbin/sshd", "-D", "-o", "ListenAddress=0.0.0.0"]
You can probably start the ssh server when starting your container. Something like this:
docker run ubuntu /usr/sbin/sshd -D
Check out this official tutorial.
This is what I did:
FROM nginx
# install gosu
# seealso:
# https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
# https://github.com/tianon/gosu/blob/master/INSTALL.md
# https://github.com/tianon/gosu
RUN set -eux; \
apt-get update; \
apt-get install -y gosu; \
rm -rf /var/lib/apt/lists/*; \
# verify that the binary works
gosu nobody true
ENV myenv='default'
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
COPY entrypoint.sh /entrypoint.sh
ENV AIRFLOW_HOME=/usr/local/airflow
RUN mkdir $AIRFLOW_HOME
RUN groupadd --gid 8080 airflow
RUN useradd --uid 8080 --gid 8080 -ms /bin/bash -d $AIRFLOW_HOME airflow
RUN echo 'airflow:mypass' | chpasswd
EXPOSE 22
CMD ["/entrypoint.sh"]
Inside entrypoint.sh:
echo "starting ssh as root"
gosu root service ssh start &
#gosu root /usr/sbin/sshd -D &
echo "starting tail user"
exec gosu airflow tail -f /dev/null
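To try it out (image tag is arbitrary; sshd listens on the exposed port 22, and the airflow user has the password set in the Dockerfile):
docker build -t airflow-ssh .
docker run -d -p 2222:22 airflow-ssh
ssh -p 2222 airflow@localhost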
Well, I used the following command to solve that:
docker run -i -t mycentos6 /bin/bash -c '/etc/init.d/sshd start && /bin/bash'
First log in to your container and write an initialization script /bin/init as follows:
# execute in the container
cat <<EOT >> /bin/init
#!/bin/bash
service ssh start
while true; do sleep 1; done
EOT
chmod +x /bin/init
Then make sure the root user is permitted to log in via ssh:
# execute in the container
echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
Commit the container to a new image after exiting from the container:
# execute in the server
docker commit <YOUR_CONTAINER> <ANY_REPO>:<ANY_TAG>
From now on, as long as you run your container with the following command, the ssh service will be automatically started.
# execute in the server
docker run -it -d --name <NAME> <REPO>:<TAG> /bin/init
docker exec -it <NAME> /bin/bash
Done.
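To reach sshd from outside, either publish the port (add -p 2222:22 to the run command) or look up the container address first, e.g. (the placeholder names follow the commands above):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <NAME>
ssh root@<container-ip>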
You can try a more elegant way to do that with phusion/baseimage-docker
https://github.com/phusion/baseimage-docker#readme
I have the following folder structure
db
- build.sh
- Dockerfile
- file.txt
build.sh
PGUID=$(id -u postgres)
PGGID=$(id -g postgres)
CS=$(lsb_release -cs)
docker build --build-arg POSTGRES_UID=${PGUID} --build-arg POSTGRES_GID=${PGGID} --build-arg LSB_CS=${CS} -t postgres:1.0 .
docker run -d postgres:1.0 sh -c "cp file.txt ./file.txt"
Dockerfile
FROM ubuntu:19.10
RUN apt-get update
ARG LSB_CS=$LSB_CS
RUN echo "lsb_release: ${LSB_CS}"
RUN apt-get install -y sudo \
&& sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt eoan-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
RUN apt-get install -y wget \
&& apt-get install -y gnupg \
&& wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | \
sudo apt-key add -
RUN apt-get update
RUN apt-get install tzdata -y
ARG POSTGRES_GID=128
RUN groupadd -g $POSTGRES_GID postgres
ARG POSTGRES_UID=122
RUN useradd -r -g postgres -u $POSTGRES_UID postgres
RUN apt-get update && apt-get install -y postgresql-10
RUN locale-gen en_US.UTF-8
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/10/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/10/main/postgresql.conf
EXPOSE 5432
CMD ["pg_ctlcluster", "--foreground", "10", "main", "start"]
file.txt
"Hello Hello"
Basically I want to be able to build my image, start my container, and copy file.txt from my local machine into the container.
I tried doing it like this: docker run -d postgres:1.0 sh -c "cp file.txt ./file.txt", but it doesn't work. I have tried other options as well, but they don't work either.
At the moment, when I run my script with sh build.sh, it runs everything and even starts a container, but it doesn't copy that file over to the container.
Any help on this is appreciated.
Sounds like what you want is to mount the file into a location inside your docker container.
You can mount a local directory into your container and access it from the inside:
mkdir /some/dirname
cp file.txt /some/dirname/
# run as demon, mount /some/dirname to /directory/in/container, run sh
docker run -d -v /some/dirname:/directory/in/container postgres:1.0 sh
Minimal working example:
On windows host:
d:\>mkdir d:\temp
d:\>mkdir d:\temp\docker
d:\>mkdir d:\temp\docker\dir
d:\>echo "SomeDataInFile" > d:\temp\docker\dir\file.txt
# mount one local file to root in docker container, renaming it in the process
d:\>docker run -it -v d:\temp\docker\dir\file.txt:/other_file.txt alpine
In docker container:
/ # ls
bin etc lib mnt other_file.txt root sbin sys usr
dev home media opt proc run srv tmp var
/ # cat other_file.txt
"SomeDataInFile"
/ # echo 32 >> other_file.txt
/ # cat other_file.txt
"SomeDataInFile"
32
/ # exit
This will mount the (outside) directory/file as a folder/file inside your container. If you specify a directory/file inside your container that already exists, it will be shadowed.
Back on windows host:
d:\>more d:\temp\docker\dir\file.txt
"SomeDataInFile"
32
See e.g. Docker volumes vs mount bind - Use cases on Serverfault for more info about ways to bind mount or use volumes.
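If you only need a one-time copy rather than a shared mount, docker cp on a running container is another option (the container name here is hypothetical):
docker run -d --name mydb postgres:1.0
docker cp file.txt mydb:/file.txt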
I have the following Dockerfile:
FROM centos:7
ENV container docker
RUN yum -y update && yum -y install net-tools && yum -y install initscripts && yum -y install epel-release && yum -y install nginx && yum clean all
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
# expose Nginx ports http/https
EXPOSE 80 443
RUN curl https://www.sheldonbrown.com/web_sample1.html > /usr/share/nginx/index.html
RUN mv /usr/share/nginx/index.html /usr/share/nginx/html/index.html -f
RUN mkdir -p /local/nginx
RUN touch /local/nginx/start.sh
RUN chmod 755 /local/nginx/start.sh
RUN echo "#!/bin/bash" >> /local/nginx/start.sh
RUN echo "nginx" >> /local/nginx/start.sh
RUN echo "tail -f /var/log/nginx/access.log" >> /local/nginx/start.sh
ENTRYPOINT ["/bin/bash", "-c", "/local/nginx/start.sh"]
I'm building it with docker build -t "my_nginx" .
And then running it with docker run -i -t --rm -p 8888:80 --name nginx "my_nginx"
http://localhost:8888/ shows the page, but no logging is shown.
If I press Ctrl-C, nginx is stopped but the tail on the logging is shown.
If I press Ctrl-C again, the container is gone.
Question: How can I let nginx run AND show the tail of the logging (which is preferably also visible using the "docker logs" command)?
The easiest way to accomplish this is just to use the Docker Hub nginx image, which deals with this for you. The corresponding Dockerfile could be as little as:
FROM nginx:1.19
# Specifically use ADD because it will fetch a URL
ADD https://www.sheldonbrown.com/web_sample1.html /usr/share/nginx/html/index.html
If you look at its Dockerfile, it actually uses symbolic links to send Nginx's "normal" logs to the container stdout:
# forward request and error logs to docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log
That gets around most of the mechanics in your Dockerfile: you can just run nginx as the main container command without using a second process to cat the logs. You can basically trim out the entire last half and get:
FROM centos:7
ENV container docker
RUN yum -y install epel-release \
&& yum -y install net-tools initscripts nginx \
&& yum clean all
RUN echo "daemon off;" >> /etc/nginx/nginx.conf \
&& ln -s /dev/stdout /var/log/nginx/access.log \
&& ln -s /dev/stderr /var/log/nginx/error.log
RUN curl -o /usr/share/nginx/html/index.html https://www.sheldonbrown.com/web_sample1.html
EXPOSE 80 443
CMD ["nginx"]
Yet another possibility here (with any of these images) is to mount your own volume over the container's /var/log/nginx directory. That gives you your own host-visible directory of logs that you can inspect at your convenience.
mkdir logs
docker run -v $PWD/logs:/var/log/nginx -d ...
less logs/access.log
(In the shell script you construct in the Dockerfile, you use the Nginx daemon off directive to run as a foreground process, which means the nginx command will never exit on its own. That in turn means the script never advances to the tail line, which is why you don't get logs out.)
You should remove the tail command from the Dockerfile, run the container with docker run -it -d --rm -p 8888:80 --name nginx "my_nginx" and then use docker logs -f nginx.
I am struggling with permissions on a docker volume; I get access denied when writing.
This is a small part of my Dockerfile:
FROM ubuntu:18.04
RUN apt-get update && \
apt-get install -y \
apt-transport-https \
build-essential \
ca-certificates \
curl \
vim && \............
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash - && apt-get install -y nodejs
# Add non-root user
ARG USER=user01
RUN useradd -Um -d /home/$USER -s /bin/bash $USER && \
apt install -y python3-pip && \
pip3 install qrcode[pil]
#Copy that startup.sh into the scripts folder
COPY /scripts/startup.sh /scripts/startup.sh
#Making the startup.sh executable
RUN chmod -v +x /scripts/startup.sh
#Copy node API files
COPY --chown=user1 /node_api/* /home/user1/
USER $USER
WORKDIR /home/$USER
# Expose needed ports
EXPOSE 3000
VOLUME /data_storage
ENTRYPOINT [ "/scripts/startup.sh" ]
Also a small part of my startup.sh:
#!/bin/bash
/usr/share/lib/provision.py --enterprise-seed $ENTERPRISE_SEED > config.json
Then my docker build command:
sudo docker build -t mycontainer .
And the docker run command:
sudo docker run -v data_storage:/home/user01/.client -p 3008:3000 -itd mycontainer
The problem I have is that the Python script will create the folder /home/user01/.client and copy some files in there. That always worked fine. But now I want those files, which are data files, in a volume for backup purposes. And as I am mapping with my volume, I get permission denied, so the Python script is not able to write anymore.
So at the end of my Dockerfile, these instructions, combined with the mapping in the docker run command, give me the permission denied:
VOLUME /data_storage
Any suggestions on how to resolve this? Are some more permissions needed for user01?
Thanks
I was able to resolve my issue by removing the VOLUME instruction from the Dockerfile and just doing the mapping when executing docker run:
sudo docker run -v data_storage:/home/user01/.client -p 3008:3000 -itd mycontainer
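If you do want to keep the VOLUME instruction, one sketch that usually avoids the permission error is to create the mount point and chown it to the unprivileged user before declaring the volume, since a named volume inherits ownership from the image content on first use (paths and $USER as in the question's Dockerfile):
RUN mkdir -p /home/$USER/.client && chown -R $USER /home/$USER/.client
VOLUME /home/$USER/.client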
I'm trying to create a Dockerfile (the base OS must be CentOS) that will install MariaDB, start MariaDB, and keep MariaDB running, so that I can use the container in GitLab to run my integration tests (Java). This is what I have so far:
FROM centos:7
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Install epel and java
RUN yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel wget
ENV JAVA_HOME /usr/lib/jvm/java-1.8.0-openjdk/
EXPOSE 8080
EXPOSE 3306
# install mariadb
RUN yum -y install mariadb
RUN yum -y install mariadb-server
RUN systemctl start mariadb
ENTRYPOINT tail -f /dev/null
The error I'm getting is:
Failed to get D-Bus connection: Operation not permitted
You can do something like this:
FROM centos/mariadb-102-centos7
USER root
# Install epel and java
RUN yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel wget
ENV JAVA_HOME /usr/lib/jvm/java-1.8.0-openjdk/
You can mount your code folder into this container and execute it with docker exec.
It is recommended, however, that you use two different containers: one for the db and one for your code. You can then pass the code container the env vars required to connect to the db container.
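For example, a docker-compose sketch of the two-container layout (service names, credentials, and the app-side variable names are assumptions; check the db image's documentation for the exact env vars it supports):
version: '3'
services:
  db:
    image: centos/mariadb-102-centos7
    environment:
      MYSQL_ROOT_PASSWORD: secret
  tests:
    build: .
    environment:
      DB_HOST: db        # the service name resolves to the db container
      DB_PASSWORD: secret
    depends_on:
      - db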
Nothing is running by default in containers, including systemd, so you cannot use systemd to start MariaDB.
If we reference the official mariadb Dockerfile, we can see that you can start MariaDB by adding CMD ["mysqld"] to the Dockerfile.
You must also make sure to install MariaDB in your container with RUN yum -y install mariadb mariadb-server, as it is not installed by default either.
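Putting that together, a minimal sketch on the question's CentOS base (initializing the data directory at build time is a shortcut; production images do this in an entrypoint so the data can live on a volume):
FROM centos:7
RUN yum -y install mariadb mariadb-server && yum clean all
# Initialize the data directory and hand it to the mysql user
RUN mysql_install_db --user=mysql --datadir=/var/lib/mysql
EXPOSE 3306
USER mysql
CMD ["mysqld_safe"]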
I'm trying to run Docker inside my Jenkins slave container on CentOS 7.1.
These are the steps I performed in my Dockerfile:
FROM java:8
ARG user=jenkins
ARG group=jenkins
ARG uid=1000
ARG gid=1000
RUN groupadd -g ${gid} ${group} \
&& useradd -d "$JENKINS_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
RUN groupadd -g 983 docker \
&& gpasswd -a ${user} docker
So I have a user jenkins (uid 1000) in a group jenkins (gid 1000) and in a group docker (gid 983). Why did I choose gid 983?
Well if I check /etc/group on my host I see:
docker:x:983:centos
In my docker-compose script I'm mounting my docker socket, which is why I used the same gid as on my host.
Part of docker-compose:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /usr/bin/docker:/usr/bin/docker
When I exec inside my container as root:
root@c4af16c386d7:/var/jenkins_home# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
jenkins-slave 1.0 94a5d6606f86 10 minutes
jenkins 2.7.1 b4974ba62598 3 weeks ago 741 MB
java 8-jdk 264282a59a95 7 weeks ago 669.2 MB
But as jenkins user:
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
In my container:
cat /etc/passwd
jenkins:x:1000:1000::/var/jenkins_home:/bin/bash
cat /etc/group
jenkins:x:1000:
docker:x:983:jenkins
Addition:
$ docker exec -it ec52d4125a02 bash
root@ec52d4125a02:/var/jenkins_home# whoami
root
root@ec52d4125a02:/var/jenkins_home# su jenkins
jenkins#ec52d4125a02:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a23521523249 jenkins:2.7.1 "/bin/tini -- /usr/lo" 20 minutes ago Up 20 minutes 0.0.0.0:8080->8080/tcp, 0.0.0.0:32777->22/tcp, 0.0.0.0:32776->50000/tcp jenkins-master
ec52d4125a02 jenkins-slave:1.0 "setup-sshd" 20 minutes ago Up 20 minutes 0.0.0.0:32775->22/tcp, 0.0.0.0:32774->8080/tcp, 0.0.0.0:32773->50000/tcp jenkins-slave
but:
$ docker exec -it -u jenkins ec52d4125a02 bash
jenkins#ec52d4125a02:~$ docker ps
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
In the first case my jenkins user:
uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins),983(docker)
In the second case:
uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins)
First, why do you need to spin up containers from inside another container with Jenkins? Here's why this is not a good idea.
Having said that, suppose you still want to go ahead. The first thing to know is that there are several steps you need to take to run Docker inside a Docker container. For example, have you started this container in --privileged mode?
You should try using Jerome Petazzoni's Docker in Docker as it does everything you need.
You can then combine DinD's machinery with a Jenkins installation. Here's an example that I've put together by mashing up Jerome's DinD with other things, assembling a docker container that has Jenkins, Docker Compose and other useful stuff:
Dockerfile:
FROM ubuntu:xenial
ENV UBUNTU_FLAVOR xenial
#== Ubuntu flavors - common
RUN echo "deb http://archive.ubuntu.com/ubuntu ${UBUNTU_FLAVOR} main universe\n" > /etc/apt/sources.list \
&& echo "deb http://archive.ubuntu.com/ubuntu ${UBUNTU_FLAVOR}-updates main universe\n" >> /etc/apt/sources.list
MAINTAINER Rogério Peixoto
ENV JENKINS_HOME /var/jenkins_home
ENV JENKINS_SLAVE_AGENT_PORT 50000
ARG user=jenkins
ARG group=jenkins
ARG uid=1000
ARG gid=1000
# Jenkins is run with user `jenkins`, uid = 1000
# If you bind mount a volume from the host or a data container,
# ensure you use the same uid
RUN groupadd -g ${gid} ${group} \
&& useradd -d "$JENKINS_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
# useful stuff.
RUN apt-get update -q && apt-get install -qy \
apt-transport-https \
ca-certificates \
curl \
lxc \
supervisor \
zip \
git \
iptables \
locales \
nano \
make \
openssh-client \
openjdk-8-jdk-headless \
&& rm -rf /var/lib/apt/lists/*
# Install Docker from Docker Inc. repositories.
RUN curl -sSL https://get.docker.com/ | sh
# Install the wrapper script from https://raw.githubusercontent.com/docker/docker/master/hack/dind.
ADD ./wrapdocker /usr/local/bin/wrapdocker
RUN chmod +x /usr/local/bin/wrapdocker
# Define additional metadata for our image.
VOLUME /var/lib/docker
ENV JENKINS_VERSION 2.8
ENV JENKINS_SHA 4d83a40319ecf4eaab2344a18c197bd693080530
RUN mkdir -p /usr/share/jenkins/ \
&& curl -SL http://repo.jenkins-ci.org/public/org/jenkins-ci/main/jenkins-war/${JENKINS_VERSION}/jenkins-war-${JENKINS_VERSION}.war -o /usr/share/jenkins/jenkins.war
# RUN echo "$JENKINS_SHA /usr/share/jenkins/jenkins.war" | sha1sum -c -
ENV JENKINS_UC https://updates.jenkins.io
RUN mkdir -p /usr/share/jenkins/ref \
&& chown -R ${user} "$JENKINS_HOME" /usr/share/jenkins/ref
RUN usermod -a -G docker jenkins
ENV DOCKER_COMPOSE_VERSION 1.8.0-rc1
# Install Docker Compose
RUN curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
RUN apt-get install -y python-pip && pip install supervisor-stdout
EXPOSE 8080
EXPOSE 50000
ADD supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord"]
supervisord.conf
[supervisord]
nodaemon=true
[program:docker]
priority=10
command=wrapdocker
startsecs=0
exitcodes=0,1
[program:chown]
priority=20
command=chown -R jenkins:jenkins /var/jenkins_home
startsecs=0
[program:jenkins]
priority=30
user=jenkins
environment=JENKINS_HOME="/var/jenkins_home",HOME="/var/jenkins_home",USER="jenkins"
command=java -jar /usr/share/jenkins/jenkins.war
stdout_events_enabled = true
stderr_events_enabled = true
[eventlistener:stdout]
command=supervisor_stdout
buffer_size=100
events=PROCESS_LOG
result_handler=supervisor_stdout:event_handler
You can get the wrapdocker file here.
Put all that in the same directory and build it:
docker build -t my_dind_jenkins .
Then run it:
docker run -d --privileged \
--name=master-jenkins \
-p 8080:8080 \
-p 50000:50000 my_dind_jenkins
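Then verify that Jenkins is up and that Docker works inside the container (names as above):
docker logs -f master-jenkins
docker exec -it master-jenkins docker info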