I have the following folder structure
db
- build.sh
- Dockerfile
- file.txt
build.sh
PGUID=$(id -u postgres)
PGGID=$(id -g postgres)
CS=$(lsb_release -cs)
docker build --build-arg POSTGRES_UID=${PGUID} --build-arg POSTGRES_GID=${PGGID} --build-arg LSB_CS=${CS} -t postgres:1.0 .
docker run -d postgres:1.0 sh -c "cp file.txt ./file.txt"
Dockerfile
FROM ubuntu:19.10
RUN apt-get update
ARG LSB_CS=$LSB_CS
RUN echo "lsb_release: ${LSB_CS}"
RUN apt-get install -y sudo \
&& sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt eoan-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
RUN apt-get install -y wget \
&& apt-get install -y gnupg \
&& wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | \
sudo apt-key add -
RUN apt-get update
RUN apt-get install tzdata -y
ARG POSTGRES_GID=128
RUN groupadd -g $POSTGRES_GID postgres
ARG POSTGRES_UID=122
RUN useradd -r -g postgres -u $POSTGRES_UID postgres
RUN apt-get update && apt-get install -y postgresql-10
RUN locale-gen en_US.UTF-8
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/10/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/10/main/postgresql.conf
EXPOSE 5432
CMD ["pg_ctlcluster", "--foreground", "10", "main", "start"]
file.txt
"Hello Hello"
Basically I want to be able to build my image, start my container, and copy file.txt from my local machine into the Docker container.
I tried doing it like this: docker run -d postgres:1.0 sh -c "cp file.txt ./file.txt", but it doesn't work. I have tried other options as well, but they don't work either.
At the moment, when I run my script with sh build.sh, it runs everything and even starts a container, but it doesn't copy that file over to the container.
Any help on this is appreciated.
Sounds like what you want is to mount the file into a location inside your Docker container.
You can mount a local directory into your container and access it from the inside:
mkdir /some/dirname
cp file.txt /some/dirname/
# run as daemon, mount /some/dirname to /directory/in/container, run sh
docker run -d -v /some/dirname:/directory/in/container postgres:1.0 sh
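Applied to the layout from the question, that could look like this (a sketch; it assumes build.sh is run from the db directory and that mounting file.txt at / inside the container is acceptable):
docker run -d -v "$(pwd)/file.txt:/file.txt" postgres:1.0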
Minimal working example:
On windows host:
d:\>mkdir d:\temp
d:\>mkdir d:\temp\docker
d:\>mkdir d:\temp\docker\dir
d:\>echo "SomeDataInFile" > d:\temp\docker\dir\file.txt
# mount one local file to root in docker container, renaming it in the process
d:\>docker run -it -v d:\temp\docker\dir\file.txt:/other_file.txt alpine
In docker container:
/ # ls
bin etc lib mnt other_file.txt root sbin sys usr
dev home media opt proc run srv tmp var
/ # cat other_file.txt
"SomeDataInFile"
/ # echo 32 >> other_file.txt
/ # cat other_file.txt
"SomeDataInFile"
32
/ # exit
This will mount the (outside) directory/file as a folder/file inside your container. If you specify a directory/file inside the container that already exists, it will be shadowed.
Back on windows host:
d:\>more d:\temp\docker\dir\file.txt
"SomeDataInFile"
32
See e.g. Docker volumes vs mount bind - Use cases on Serverfault for more info about ways to bind mount or use volumes.
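For completeness, a named-volume variant would look roughly like this (a sketch; the volume name mydata and the mount point are arbitrary):
docker volume create mydata
docker run -d -v mydata:/directory/in/container postgres:1.0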
My Dockerfile:
FROM openjdk:11-jdk
RUN apt-get update \
&& apt-get install --no-install-recommends -y git openssh-server \
&& rm -rf /var/lib/apt/lists/*
RUN groupadd --gid 3000 jenkins \
&& useradd --uid 3000 --gid jenkins --shell /bin/bash --create-home jenkins
RUN mkdir -p /var/run/sshd
EXPOSE 22
ENTRYPOINT /usr/sbin/sshd -D && bash
docker build --tag sample .
I tried to start it with the jenkins user:
docker run -u 3000:3000 sample
which returns
Could not load host key: /etc/ssh/ssh_host_rsa_key
Could not load host key: /etc/ssh/ssh_host_ecdsa_key
Could not load host key: /etc/ssh/ssh_host_ed25519_key
I've read all the similar questions on Stack Overflow and nothing works in my case.
I tried:
RUN yes 'y' | ssh-keygen -b 1024 -t rsa -f /etc/ssh/all_needed_keys -N ''
and this also doesn't work:
RUN /usr/bin/ssh-keygen -A
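For what it's worth, ssh-keygen -A generates host keys owned by root with mode 600, so a process running as uid 3000 still cannot read them. A rough sketch of the permissions side (untested here; keeping sshd running as root is usually simpler):
# generate the default host keys (rsa, ecdsa, ed25519) at build time
RUN /usr/bin/ssh-keygen -A
# only if sshd really has to run as the jenkins user: make the keys readable
# by that user and move sshd to an unprivileged port
RUN chown jenkins:jenkins /etc/ssh/ssh_host_*_key /etc/ssh/ssh_host_*_key.pub \
 && echo "Port 2222" >> /etc/ssh/sshd_config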
I'm trying to create a Jenkins Docker agent that has Go.
The following is my Dockerfile.
After I build it, running docker run myimage:0.0.1 go version returns the Go version; however, if I try the following, it doesn't find Go at all.
docker run --privileged --dns 9.0.128.50 --dns 9.0.130.50 -d -P --name slave myimage:0.0.1
docker ps ## grab the port number
ssh -p PORT_NUMBER jenkins@localhost
What am I missing in order to make Go available under the Jenkins user?
FROM golang:1.11.5-alpine
RUN apk add --no-cache \
bash \
curl \
wget \
git \
openssh \
tar
COPY ssh/*key /etc/ssh/
COPY skel/ /home/jenkins
COPY id_rsa /home/jenkins/.ssh/id_rsa
COPY id_rsa.pub /home/jenkins/.ssh/id_rsa.pub
RUN addgroup docker \
&& adduser -s /bin/bash -h /home/jenkins -G docker -D jenkins \
&& echo "jenkins ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers \
&& echo "jenkins:jenkinspass" | chpasswd \
&& chmod u+s /bin/ping \
&& chown -R jenkins:docker /home/jenkins \
&& mv /etc/profile.d/color_prompt /etc/profile.d/color_prompt.sh \
&& mv /bin/sh /bin/sh.bak \
&& ln -s /bin/bash /bin/sh
# Standard SSH port
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
If you run:
docker run myimage:0.0.1 which go
You will see that the go executable is at /usr/local/go/bin/go.
If you connect as the jenkins user via ssh and run /usr/local/go/bin/go version, it works as well.
Conclusion:
The Go installation was done as the root user.
The jenkins user was added after Go was installed and does not have /usr/local/go/bin in its $PATH environment variable.
Solution:
Add /usr/local/go/bin to $PATH for the jenkins user, or
Use the go executable with its full path.
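A minimal sketch of the first option, assuming the Alpine-based image above. Note that an ENV PATH set in the Dockerfile is generally not enough for SSH sessions, because sshd builds a fresh environment for login shells, so appending to /etc/profile (read by login shells) is one way to do it:
# make the Go toolchain visible to login shells opened over ssh
RUN echo 'export PATH=$PATH:/usr/local/go/bin' >> /etc/profile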
Given:
container based on ubuntu:13.10
installed ssh (via apt-get install ssh)
Problem: each time I start the container I have to run sshd manually with service ssh start.
Tried: update-rc.d ssh defaults, but it does not help.
Question: how to setup container to start sshd service automatically during container start?
Just try:
ENTRYPOINT service ssh restart && bash
in your Dockerfile; it works fine for me!
more details here: How to automatically start a service when running a docker container?
Here is a Dockerfile which installs ssh server and runs it:
# Build Ubuntu image with base functionality.
FROM ubuntu:focal AS ubuntu-base
ENV DEBIAN_FRONTEND noninteractive
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
# Setup the default user.
RUN useradd -rm -d /home/ubuntu -s /bin/bash -g root -G sudo ubuntu
RUN echo 'ubuntu:ubuntu' | chpasswd
USER ubuntu
WORKDIR /home/ubuntu
# Build image with Python and SSHD.
FROM ubuntu-base AS ubuntu-with-sshd
USER root
# Install required tools.
RUN apt-get -qq update \
&& apt-get -qq --no-install-recommends install vim-tiny=2:8.1.* \
&& apt-get -qq --no-install-recommends install sudo=1.8.* \
&& apt-get -qq --no-install-recommends install python3-pip=20.0.* \
&& apt-get -qq --no-install-recommends install openssh-server=1:8.* \
&& apt-get -qq clean \
&& rm -rf /var/lib/apt/lists/*
# Configure SSHD.
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
RUN mkdir /var/run/sshd
RUN bash -c 'install -m755 <(printf "#!/bin/sh\nexit 0") /usr/sbin/policy-rc.d'
RUN ex +'%s/^#\zeListenAddress/\1/g' -scwq /etc/ssh/sshd_config
RUN ex +'%s/^#\zeHostKey .*ssh_host_.*_key/\1/g' -scwq /etc/ssh/sshd_config
RUN RUNLEVEL=1 dpkg-reconfigure openssh-server
RUN ssh-keygen -A -v
RUN update-rc.d ssh defaults
# Configure sudo.
RUN ex +"%s/^%sudo.*$/%sudo ALL=(ALL:ALL) NOPASSWD:ALL/g" -scwq! /etc/sudoers
# Generate and configure user keys.
USER ubuntu
RUN ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
#COPY --chown=ubuntu:root "./files/authorized_keys" /home/ubuntu/.ssh/authorized_keys
# Setup default command and/or parameters.
EXPOSE 22
CMD ["/usr/bin/sudo", "/usr/sbin/sshd", "-D", "-o", "ListenAddress=0.0.0.0"]
Build with the following command:
docker build --target ubuntu-with-sshd -t ubuntu-with-sshd .
Then run with:
docker run -p 2222:22 ubuntu-with-sshd
To connect to the container via the local port, run: ssh -v localhost -p 2222.
To check the container's IP address, use docker ps and docker inspect.
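For example, a quick way to print a running container's IP (assuming the container was started from the ubuntu-with-sshd image built above):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -q -f ancestor=ubuntu-with-sshd)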
Here is an example docker-compose.yml file:
---
version: '3.4'
services:
  ubuntu-with-sshd:
    image: "ubuntu-with-sshd:latest"
    build:
      context: "."
      target: "ubuntu-with-sshd"
    networks:
      mynet:
        ipv4_address: 172.16.128.2
    ports:
      - "2222:22"
    privileged: true # Required for /usr/sbin/init
networks:
  mynet:
    ipam:
      config:
        - subnet: 172.16.128.0/24
To run, type:
docker-compose up --build
I think the correct way to do it would be to follow Docker's instructions for dockerizing an SSH service.
And in relation to the specific question, the following lines added at the end of the Dockerfile will achieve what you were looking for:
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Dockerize a SSHD service
I have created a Dockerfile that runs ssh inside. I think it is not secure, but for testing/development in a DMZ it could be OK:
FROM ubuntu:20.04
USER root
# change root password to `ubuntu`
RUN echo 'root:ubuntu' | chpasswd
ENV DEBIAN_FRONTEND noninteractive
# install ssh server
RUN apt-get update && apt-get install -y \
openssh-server sudo \
&& rm -rf /var/lib/apt/lists/*
# workdir for ssh
RUN mkdir -p /run/sshd
# generate server keys
RUN ssh-keygen -A
# allow root to login
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config
EXPOSE 22
# run ssh server
CMD ["/usr/sbin/sshd", "-D", "-o", "ListenAddress=0.0.0.0"]
You can probably start the ssh server when starting your container. Something like this:
docker run ubuntu /usr/sbin/sshd -D
Check out this official tutorial.
This is what I did:
FROM nginx
# install gosu
# seealso:
# https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
# https://github.com/tianon/gosu/blob/master/INSTALL.md
# https://github.com/tianon/gosu
RUN set -eux; \
apt-get update; \
apt-get install -y gosu; \
rm -rf /var/lib/apt/lists/*; \
# verify that the binary works
gosu nobody true
ENV myenv='default'
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
COPY entrypoint.sh /entrypoint.sh
ENV AIRFLOW_HOME=/usr/local/airflow
RUN mkdir $AIRFLOW_HOME
RUN groupadd --gid 8080 airflow
RUN useradd --uid 8080 --gid 8080 -ms /bin/bash -d $AIRFLOW_HOME airflow
RUN echo 'airflow:mypass' | chpasswd
EXPOSE 22
CMD ["/entrypoint.sh"]
Inside entrypoint.sh:
echo "starting ssh as root"
gosu root service ssh start &
#gosu root /usr/sbin/sshd -D &
echo "starting tail user"
exec gosu airflow tail -f /dev/null
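To check that sshd actually comes up under this entrypoint, something like the following should work (the container name airflow-ssh and the image tag are made up):
docker run -d --name airflow-ssh -p 2222:22 my-airflow-image
docker exec airflow-ssh service ssh status   # should report that sshd is running
ssh airflow@localhost -p 2222                # password: mypass, as set via chpasswd above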
Well, I used the following command to solve that
docker run -i -t mycentos6 /bin/bash -c '/etc/init.d/sshd start && /bin/bash'
First log in to your container and write an initialization script /bin/init as follows:
# execute in the container
cat <<EOT >> /bin/init
#!/bin/bash
service ssh start
while true; do sleep 1; done
EOT
# make the script executable so it can be used as the container command
chmod +x /bin/init
Then make sure the root user is permitted to log in via ssh:
# execute in the container
echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
Commit the container to a new image after exiting from the container:
# execute in the server
docker commit <YOUR_CONTAINER> <ANY_REPO>:<ANY_TAG>
From now on, as long as you run your container with the following command, the ssh service will be automatically started.
# execute in the server
docker run -it -d --name <NAME> <REPO>:<TAG> /bin/init
docker exec -it <NAME> /bin/bash
Done.
You can try a more elegant way to do that with phusion/baseimage-docker
https://github.com/phusion/baseimage-docker#readme
I have a dockerfile:
FROM jenkins:1.651.1
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
USER root
RUN groupadd docker
RUN usermod -a -G docker jenkins
USER jenkins
I add my user jenkins to the group docker.
When I access my container:
jenkins@bc145b8cfc1d:/$ docker ps
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
jenkins@bc145b8cfc1d:/$ whoami
jenkins
This is the content of /etc/group in my container:
jenkins:x:1000:
docker:x:1001:jenkins
My jenkins user is in the docker group:
jenkins@bc145b8cfc1d:/$ groups jenkins
jenkins : jenkins docker
What am I doing wrong? I want to use docker commands with my jenkins user. I'm on Amazon EC2 Container Service.
This is how I start a container from my image:
docker run -d -v /var/run/docker.sock:/var/run/docker.sock \
  -v /usr/bin/docker:/usr/bin/docker:ro \
  -v /lib64/libdevmapper.so.1.02:/usr/lib/x86_64-linux-gnu/libdevmapper.so.1.02 \
  -v /lib64/libudev.so.0:/usr/lib/x86_64-linux-gnu/libudev.so.0 \
  -p 8080:8080 --name jenkins -u jenkins --privileged=true -t -i \
  my-jenkins:1.0
This was my 'solution', but it only worked on Ubuntu (not on my CentOS).
Dockerfile
FROM jenkins:1.651.1
USER root
RUN apt-get update \
&& apt-get install -y apt-transport-https ca-certificates \
&& echo "deb https://apt.dockerproject.org/repo debian-jessie main" > /etc/apt/sources.list.d/docker.list \
&& apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D \
&& apt-get update -y \
&& apt-get install -y docker-engine
RUN gpasswd -a jenkins docker
USER jenkins
Run command:
docker run -d -it -v /var/run/docker.sock:/var/run/docker.sock test-jenkins
On Ubuntu:
jenkins@c73c683b02d7:/$ whoami
jenkins
jenkins@c73c683b02d7:/$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                 NAMES
c73c683b02d7        test-jenkins        "/bin/tini -- /usr/lo"   2 minutes ago       Up 2 minutes        8080/tcp, 50000/tcp   condescending_wing
I think it has something to do with the gid:
cat /etc/group in the container (on Ubuntu and CentOS):
jenkins:x:1000:
docker:x:999:jenkins
cat /etc/group on Ubuntu (also 999)
docker:x:999:ubuntu
cat /etc/group on CentOS (different gid):
docker:x:983:centos
There is probably a solution for this, but I only needed Ubuntu, so I did not dig further into it.
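One common workaround (a sketch, not something I verified on CentOS) is to pass the host's docker gid into the build, so the docker group inside the image matches the group owning /var/run/docker.sock on the host:
# on the host: read the docker gid and pass it as a build argument
docker build --build-arg DOCKER_GID=$(getent group docker | cut -d: -f3) -t test-jenkins .
# in the Dockerfile: align the existing docker group (created by the docker-engine package) with that gid
ARG DOCKER_GID=999
RUN groupmod -g ${DOCKER_GID} docker \
 && gpasswd -a jenkins docker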
Once your container is running, you can exec into it as different users:
docker exec -ti -u 0 jenkins bash // root
docker exec -ti -u 1 jenkins bash // probably jenkins
Using the root user, you can su jenkins if you need to switch to the jenkins user from the root user.
If you want to run Docker containers inside your existing container (it seems like that is what you're trying to do), remember to start your container with the --privileged flag, e.g. docker run --privileged ...