Could you help me? I'm trying to run a container from a Dockerfile, but it shows
the warnings below and my container does not start.
compose.parallel.parallel_execute_iter: Finished processing: <Container: remote-host>
Starting remote-host ... done
compose.parallel.feed_queue: Pending: set()
compose.parallel.parallel_execute_iter: Finished processing: <Service:
remote_host>
compose.parallel.feed_queue: Pending: set()
Attaching to jenkins, remote-host
compose.cli.verbose_proxy.proxy_callable: docker logs <-
('f2e305942e57ce1fe90c2ca94d3d9bbc004155a136594157e41b7a916d1ca7de',
stdout=True, stderr=True, stream=True, follow=True)
remote-host | Unable to load host key: /etc/ssh/ssh_host_rsa_key
remote-host | Unable to load host key: /etc/ssh/ssh_host_ecdsa_key
remote-host | Unable to load host key: /etc/ssh/ssh_host_ed25519_key
remote-host | sshd: no hostkeys available -- exiting.
compose.cli.verbose_proxy.proxy_callable: docker events <-
(filters={'label': ['com.docker.compose.project=jenkins',
'com.docker.compose.oneoff=False']}, decode=True)
My Dockerfile is this:
FROM centos
RUN yum -y install openssh-server
RUN yum install -y passwd
RUN useradd remote_user && \
    echo "1234" | passwd remote_user --stdin && \
    mkdir /home/remote_user/.ssh && \
    chmod 700 /home/remote_user/.ssh
COPY remote_user.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user && \
    chmod 400 /home/remote_user/.ssh/authorized_keys
CMD /usr/sbin/sshd -D
Start with an empty directory and put the following in that directory as a file called Dockerfile:
FROM centos
RUN yum -y install openssh-server
RUN yum install -y passwd
RUN useradd remote_user
RUN echo "1234" | passwd remote_user --stdin
RUN mkdir /home/remote_user/.ssh
RUN chmod 700 /home/remote_user/.ssh
COPY remote_user.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user
RUN chmod 400 /home/remote_user/.ssh/authorized_keys
CMD /usr/sbin/sshd -D
# CMD ["/bin/bash"]
# ... save this file as Dockerfile then in same dir issue following
#
# docker build --tag stens_centos . # creates image stens_centos
#
# docker run -d stens_centos sleep infinity # launches container and just sleeps; its only purpose here is to keep the container running
#
# docker ps # show running containers
#
#
# ... find CONTAINER ID from above and put into something like this
#
# docker exec -ti $( docker ps | grep stens_centos | cut -d' ' -f1 ) bash # login to running container
#
Then, in that same dir, put your ssh key files, as per:
eve@milan ~/Dropbox/Documents/code/docker/centos $ ls -la
total 28
drwxrwxr-x 2 eve eve 4096 Nov 2 15:20 .
drwx------ 77 eve eve 12288 Nov 2 15:14 ..
-rw-rw-r-- 1 eve eve 875 Nov 2 15:20 Dockerfile
-rwx------ 1 eve eve 3243 Nov 2 15:18 remote_user
-rwx------ 1 eve eve 743 Nov 2 15:18 remote_user.pub
Then cat out the Dockerfile and copy and paste the commands explained at the bottom of it ... for me, all of them just worked.
After I copy-pasted those commands listed at the bottom of the Dockerfile, the container was built and executed:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0a06ebd2752a stens_centos "sleep infinity" 7 minutes ago Up 7 minutes pedantic_brahmagupta
Keep in mind you must define at the bottom of your Dockerfile a CMD (or similar) that is exactly what you want executed as the container runs. Typically that is a server, which by definition runs forever. Alternatively, the CMD can be something that runs and then finishes, like a batch job, in which case the container will exit when that job finishes. With this knowledge, I suggest you confirm whether sshd -D will hold as a server or immediately terminate upon launch of the container.
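Judging by your log, sshd is terminating precisely because the image contains no host keys (the "Unable to load host key" and "no hostkeys available" lines). A minimal sketch of a fix, assuming the stock OpenSSH ssh-keygen is present in the image, is to generate the default host keys at build time, just before the CMD:

# generate default rsa/ecdsa/ed25519 host keys so sshd can start
RUN ssh-keygen -A
CMD ["/usr/sbin/sshd", "-D"]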
I've just replied to this GitHub issue, but here's what I experienced and how I fixed it
I just had this issue with my Jekyll blog site, which I normally bring up using docker-compose with a mapped volume so it rebuilds when I create a new post. It was hanging, so I ran docker-compose up with the --verbose switch and saw the same compose.parallel.feed_queue: Pending: set().
I tried it on my MacBook and it was working fine.
I didn't have any experimental features turned on, but I needed to go (on Windows) into Settings -> Resources -> File Sharing and add the folder I was mapping in my docker-compose (the root of my blog site).
Re-ran docker-compose and it's now up and running.
Just recently I stumbled on an SSH issue, and I cannot figure out what is missing. We use GitLab CI to build and deploy the project to one of our remote servers. As part of the upgrade plan, we need to replace the degrading Debian 6 server with a new RHEL 7 server. I cannot get passwordless SSH to work right from the GitLab Runner to a remote machine.
I created a reproducible example in a Dockerfile; the IP of the remote server and the user are replaced with non-sensitive data.
FROM centos:7
RUN yum install -y epel-release
RUN yum update -y
RUN yum install -y openssh-clients
RUN useradd -m joe
RUN mkdir -p /home/joe/.ssh
COPY id_rsa_shared /home/joe/.ssh/id_rsa
RUN echo "Host *\n\tStrictHostKeyChecking no\n" >> /home/joe/.ssh/config
RUN ssh-keyscan 10.x.x.x >> /home/joe/.ssh/known_hosts
RUN chown -R joe:joe /home/joe/.ssh
USER joe
CMD ["/bin/bash"]
The file id_rsa_shared is created on the local machine with the following commands:
ssh-keygen -t rsa -b 2048 -f ./id_rsa_shared
ssh-copy-id -i ./id_rsa_shared joe@10.x.x.x
This works locally. A simple ssh joe@10.x.x.x uname -a in the docker container will output the following:
Linux newweb01p.company.local 3.10.0-1160.25.1.el7.x86_64 #1 SMP Tue Apr 13 18:55:45 EDT 2021 x86_64 x86_64 x86_64 GNU/Linux
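As a debugging aid (not part of the original setup), you can print the public half of a private key and compare it against what ssh-copy-id appended to authorized_keys on the server:

ssh-keygen -y -f ./id_rsa_shared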
However, if I commit this to a branch as GitLab CI script, as shown:
image: centos:7
stages:
- deploy
dev-www:
stage: deploy
tags:
- docker
environment:
name: dev-www
url: http://dev-www.company.local
variables:
DEV_HOST: 10.x.x.x
APP_ENV: dev
DEV_USER: joe
script:
- whoami
- yum install -y epel-release
- yum update -y
- yum install -y openssh-clients
- useradd -m joe
- mkdir -p /home/joe/.ssh
- cp "./gitlab/known_hosts" /home/joe/.ssh/known_hosts
- echo "$DEV_USER_OPENSSH_KEY" >> /home/joe/.ssh/id_rsa
- echo "Host *\n\tStrictHostKeyChecking no\n" >> /home/joe/.ssh/config
- chown -R joe:joe /home/joe/.ssh/
- chmod 600 /home/joe/.ssh/*
- chmod 700 /home/joe/.ssh
- ls -Fsal /home/joe/.ssh
- su - joe
- ssh -oStrictHostKeyChecking=no "${DEV_USER}@${DEV_HOST}" uname -a
when: manual
The pipeline will fail authentication as shown:
Running with gitlab-runner 13.12.0 (7a6612da)
on docker.hqgitrunner01d.company.local K47w1s77
Preparing the "docker" executor
Using Docker executor with image centos:7 ...
Authenticating with credentials from job payload (GitLab Registry)
Pulling docker image centos:7 ...
Using docker image sha256:xxx for centos:7 with digest centos:7@sha256:xxxx ...
Preparing environment
Running on runner-k47w1s77-project-93-concurrent-0 via hqgitrunner01d.company.local...
Getting source from Git repository
Fetching changes...
Reinitialized existing Git repository in /builds/webversion3/API/.git/
Checking out 6a7c193b as tdr/psr4-composer...
Updating/initializing submodules recursively...
Executing "step_script" stage of the job script
Using docker image sha256:xxx for centos:7 with digest centos:7@sha256:xxx ...
$ whoami
root
$ useradd -m joe
$ mkdir -p /home/joe/.ssh
$ cp "./gitlab/known_hosts" /home/joe/.ssh/known_hosts
$ echo "$DEV_USER_OPENSSH_KEY" >> /home/joe/.ssh/id_rsa
$ echo "Host *\n\tStrictHostKeyChecking no\n" >> /home/joe/.ssh/config
$ chown -R joe:joe /home/joe/.ssh/*
$ chmod 600 /home/joe/.ssh/*
$ chmod 700 /home/joe/.ssh
$ ls -Fsal /home/joe/.ssh
total 16
0 drwx------ 2 root root 53 Apr 1 15:19 ./
0 drwx------ 3 joe joe 74 Apr 1 15:19 ../
4 -rw------- 1 joe joe 37 Apr 1 15:19 config
4 -rw------- 1 joe joe 3414 Apr 1 15:19 id_rsa
8 -rw------- 1 joe joe 6241 Apr 1 15:19 known_hosts
$ su - joe
$ ssh -oStrictHostKeyChecking=no "${DEV_USER}@${DEV_HOST}" uname -a
Warning: Permanently added '10.x.x.x' (ECDSA) to the list of known hosts.
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Cleaning up file based variables
ERROR: Job failed: exit code 1
Maybe there's a step I missed, because I get a 'Permission denied, please try again' message. How do I get the Docker executor to use passwordless SSH to a remote server?
The solution was really simple and straightforward; the important part is understanding SSH. The solution works. Here is a snippet from the .gitlab-ci.yml for those who have the same problem as I do:
...
- mkdir -p ~/.ssh
- touch ~/.ssh/id_rsa ~/.ssh/config ~/.ssh/known_hosts
- chmod 600 ~/.ssh/id_rsa ~/.ssh/config ~/.ssh/known_hosts
- echo "$OPENSSH_KEY" >> ~/.ssh/id_rsa
- echo "Host *\n\tStrictHostKeyChecking no" >> ~/.ssh/config
- ssh-keyscan ${DEV_HOST} >> ~/.ssh/known_hosts
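With the key, config, and known_hosts in their default ~/.ssh locations, the deploy step itself then needs no extra ssh flags (a sketch reusing the variables from the question):

- ssh "${DEV_USER}@${DEV_HOST}" uname -a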
Just inline all your ssh options. Use -i to specify your key file. You can also use -o UserKnownHostsFile to specify your known hosts file -- you don't need to copy all of that into an ssh configuration.
This should be enough to ssh successfully:
# ...
- echo "$DEV_USER_OPENSSH_KEY" > "${CI_PROJECT_DIR}/id_rsa.key"
- chmod 600 "${CI_PROJECT_DIR}/id_rsa.key"
- |
ssh -i "${CI_PROJECT_DIR}/id_rsa.key" \
-o IdentitiesOnly=yes \
-o UserKnownHostsFile="${CI_PROJECT_DIR}/gitlab/known_hosts" \
-o StrictHostKeyChecking=no \
user@host ...
Since you're disabling StrictHostKeyChecking, you can also just use /dev/null for your UserKnownHostsFile. If you want host key checking, omit the StrictHostKeyChecking=no option.
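Putting that together, a variant of the snippet above that keeps no known_hosts file at all (user@host is a placeholder, as before):

- |
  ssh -i "${CI_PROJECT_DIR}/id_rsa.key" \
      -o IdentitiesOnly=yes \
      -o UserKnownHostsFile=/dev/null \
      -o StrictHostKeyChecking=no \
      user@host uname -a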
I'm new to Docker so I might not have some of the terminology correct. Inside the container I'm getting a permission denied error on a directory shared with the host. They appear to have matching uid:gid and the permissions host side are 777. The container is not for running in the background.
I'm using the container to run a big series of untrusted programs one at a time each needing the same initial conditions. So I don't think it's feasible to copy stuff into the docker image at build time. I felt the optimal thing to do is copy the programs one at a time to a temp directory on the host and then share that directory with the fresh container for each run. I also need to collect the output from the container-run programs and keep them on the host so I can see how each program's output differs from the others.
I have looked at the following questions/answers:
Docker: Copying files from Docker container to host
How to fix docker: Got permission denied issue - successfully used to make docker run as someone other than root
How do I add a user when I'm using Alpine as a base image? and Setting up a new user - used to create the user and group
I am:
running docker as an ordinary user uid 1000, gid 1000, also belonging to the group docker
setting permissions on the shared directory host side to be 777 with uid:gid as 1000:1000 which is the same as the user
setting the uid and gid inside the container to match uid and gid from the host
using the Dockerfile to create a uid and gid each of 1000
I read here that if the first argument begins with a / or ~/, you're creating a bind mount; remove that, and you're naming the volume. So I tried both. The bind-mount version seems to have the correct uid:gid but is permission denied; the volume version comes out as root:root.
As a newbie it's hard to know what information to share so here's everything I think might be useful:
Docker command attempt 1
[osboxes@osboxes tmp]$ pwd
/var/tmp
[osboxes@osboxes tmp]$ whoami
osboxes
[osboxes@osboxes tmp]$ grep osboxes /etc/passwd
osboxes:x:1000:1000:osboxes.org:/home/osboxes:/bin/bash
[osboxes@osboxes tmp]$ groups
osboxes wheel vboxsf docker
[osboxes@osboxes tmp]$ grep osboxes /etc/group
wheel:x:10:osboxes
osboxes:x:1000:osboxes
vboxsf:x:981:osboxes
docker:x:1001:osboxes
[osboxes@osboxes tmp]$ ls -al
total 2
drwxrwxrwt. 11 root root 4096 Dec 31 12:13 .
drwxr-xr-x. 21 root root 4096 Jul 5 05:00 ..
drwxr-xr-x. 2 abrt abrt 6 Jul 5 05:00 abrt
drwxrwxrwx. 2 osboxes osboxes 6 Dec 31 12:13 host
continues...
[osboxes@osboxes tmp]$ docker run --rm -v /var/tmp/host:/var/tmp/container:rw \
--user appuser:appgroup --workdir /var/tmp/container \
-it alpine_bash_jdk11 /bin/bash
bash-5.0$ pwd
/var/tmp/container
bash-5.0$ ls -al
ls: can't open '.': Permission denied
total 0
bash-5.0$ ls -al ..
total 0
drwxrwxrwt 1 root root 23 Dec 31 12:51 .
drwxr-xr-x 1 root root 17 Dec 16 10:31 ..
drwxrwxrwx 2 appuser appgroup 6 Dec 31 12:13 container
bash-5.0$ whoami
appuser
bash-5.0$ groups
appgroup
bash-5.0$ grep appuser /etc/passwd
appuser:x:1000:1000:Linux User,,,:/home/appuser:/sbin/nologin
bash-5.0$ grep appuser /etc/group
appgroup:x:1000:appuser
Docker command attempt 2
Everything as before, except for removing the qualified path to the host's /var/tmp/host directory:
docker run --rm -v host:/var/tmp/container:rw \
--user appuser:appgroup --workdir /var/tmp/container \
-it alpine_bash_jdk11 /bin/bash
bash-5.0$ pwd
/var/tmp/container
bash-5.0$ ls -al
total 0
drwxr-xr-x 2 root root 6 Dec 31 12:13 .
drwxrwxrwt 1 root root 23 Dec 31 13:03 ..
bash-5.0$ ls -al ..
total 0
drwxrwxrwt 1 root root 23 Dec 31 13:03 .
drwxr-xr-x 1 root root 17 Dec 16 10:31 ..
drwxr-xr-x 2 root root 6 Dec 31 12:13 container
bash-5.0$ whoami
appuser
bash-5.0$ groups
appgroup
bash-5.0$ echo hello from contanier > container.msg.txt
bash: container.msg.txt: Permission denied
Docker build command
as user osboxes
docker build -t alpine_bash_jdk11 .
Dockerfile
FROM alpine:latest
RUN apk --no-cache update
RUN apk add --no-cache bash
RUN apk --no-cache add openjdk11 --repository=http://dl-cdn.alpinelinux.org/alpine/edge/community
ENV JAVA_HOME="/usr/lib/jvm/default-jvm"
ENV PATH=$PATH:${JAVA_HOME}/bin
RUN addgroup -g 1000 -S appgroup && adduser -S appuser -G appgroup -u 1000
USER appuser
I haven't used docker compose because I'm still getting my head round basic docker.
Virtual Machine which is the Docker Host
CentOS 7.2003 from osboxes.org, organization's decision, not mine
Linux osboxes 3.10.0-1160.11.1.el7.x86_64 #1 SMP Fri Dec 18 16:34:56 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
I did a yum update, then yum installed all the stuff needed to install VirtualBox guest additions which is working ok
Docker version 1.13.1, build 0be3e21/1.13.1
Physical Host
Windows 10 64-bit
VirtualBox 6.1.4r136177
both these are the organization's decisions
tl;dr: had old version of docker due to wrong install command
The answer: install docker-ce instead of docker. Depending on your system that might be
sudo apt-get install -y docker-ce
or
sudo yum -y install docker-ce
instead of sudo apt-get install -y docker
or
sudo yum -y install docker
Solution: update docker
Having found this article I could see that I had the wrong version of docker. I justifiably thought the correct command was
sudo yum install -y docker
but it should have been docker-ce
I had to yum erase -y docker docker-common
Now I have Docker version 20.10.1, build 831ebea
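For reference, on CentOS 7 the usual sequence (following Docker's own install docs; the repo URL is the one they publish) is roughly:

sudo yum remove -y docker docker-common
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce
sudo systemctl enable --now docker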
I am trying to create a Docker container with a custom D-Bus bus running inside.
I configured my Dockerfile as follow:
FROM ubuntu:16.04
COPY myCustomDbus.conf /etc/dbus-1/
RUN apt-get update && apt-get install -y dbus
RUN dbus-daemon --config-file=/etc/dbus-1/myCustomDbus.conf
After building, the socket is created but it is flagged as "file", not as "socket", and I cannot use it as a bus...
-rwxrwxrwx 1 root root 0 Mar 20 07:25 myCustomDbus.sock
If I remove this file and run the dbus-daemon command again in a terminal, the socket is successfully created:
srwxrwxrwx 1 root root 0 Mar 20 07:35 myCustomDbus.sock
I am not sure if it is a D-Bus problem or a docker one.
Instead of using the "RUN" instruction, you should use "ENTRYPOINT" to run a startup script.
The Dockerfile should look like this:
FROM ubuntu:14.04
COPY myCustomDbus.conf /etc/dbus-1/
COPY run.sh /etc/init/
RUN apt-get update && apt-get install -y dbus
ENTRYPOINT ["/etc/init/run.sh"]
And run.sh :
#!/bin/bash
dbus-daemon --config-file=/etc/dbus-1/myCustomDbus.conf --print-address
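One gotcha worth guarding against: COPY preserves the script's mode from the build context, so if run.sh was not executable there, the ENTRYPOINT will fail at start. Adding a chmod after the COPY line in the Dockerfile above avoids that:

# make sure the entrypoint script is executable inside the image
RUN chmod +x /etc/init/run.sh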
You should use a startup script: RUN instructions execute only while the image is being built, so a daemon started by RUN is already gone by the time the container runs.
my run.sh:
if ! pgrep -x "dbus-daemon" > /dev/null
then
# export DBUS_SESSION_BUS_ADDRESS=$(dbus-daemon --config-file=/usr/share/dbus-1/system.conf --print-address | cut -d, -f1)
# or:
dbus-daemon --config-file=/usr/share/dbus-1/system.conf
# and put in Dockerfile:
# ENV DBUS_SESSION_BUS_ADDRESS="unix:path=/var/run/dbus/system_bus_socket"
else
echo "dbus-daemon already running"
fi
if ! pgrep -x "/usr/lib/upower/upowerd" > /dev/null
then
/usr/lib/upower/upowerd &
else
echo "upowerd already running"
fi
Then chrome runs with --use-gl=swiftshader without errors.
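For completeness, a sketch of how this run.sh might be wired into a Dockerfile; the paths and the ENV value are assumptions based on the comments in the script:

COPY run.sh /usr/local/bin/run.sh
RUN chmod +x /usr/local/bin/run.sh
# address referenced in the script's comments
ENV DBUS_SESSION_BUS_ADDRESS="unix:path=/var/run/dbus/system_bus_socket"
ENTRYPOINT ["/usr/local/bin/run.sh"]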
I use supervisor to run cron and nginx. The problem is that when I try to COPY or VOLUME mount my cron files, it does not run my cron files in /etc/cron.d.
But when I exec -it <container_id> bash into the container and create the exact same cron file from inside, it is immediately recognized and runs as it should.
Dockerfile :
FROM phusion/baseimage:latest
ENV TERM xterm
ENV HOME /root
RUN apt-get update && apt-get install -y \
nginx \
supervisor \
curl \
nano \
net-tools
RUN rm -rf /etc/nginx/*
COPY nginx_conf /etc/nginx
COPY supervisor_conf /etc/supervisor/
RUN mkdir -p /var/log/supervisor
COPY crontabs /etc/cron.d/
RUN chmod -R 644 /etc/cron.d/
CMD /usr/bin/supervisord
The cron itself
* * * * * root curl --silent http://127.0.0.1/cronjob/cron_test_docker.php >> /var/www/html/log/docker_test.log 2>&1
cron and nginx run through supervisor
[supervisord]
nodaemon = true
[program:nginx]
command = /usr/sbin/nginx -g "daemon off;"
autostart = true
[program:cron]
command = /usr/sbin/cron -f
autostart = true
The logs inside /var/log/supervisor/ relating to cron for stdout and stderr are empty.
I also tried stripping out supervisor and running cron on its own through phusion with CMD cron -f, but got the same issue: it does not work when the source is external (COPY or VOLUME) and magically works when created inside the container.
I initially believed it to be a permissions issue and tried chmod 644 (as this was the permission a file created in the container had) on all files that were the result of the COPY.
RUN chmod 644 /etc/cron.d/
After which tried every possible combination of permissions with rwx to no avail.
Also, I tried appending the cronjob line to /etc/crontab, but it is not recognized by crontab -l.
COPY crontab /tmp/crontab
RUN cat crontab >> /etc/crontab
It would be really handy if it just worked when created through COPY or VOLUME, as it is a hassle to create it manually in the container every time.
Any help would be greatly appreciated!
Edit 1 :
Some additional information about the file permissions after COPY or VOLUME.
When I perform
COPY crontabs /etc/cron.d/
RUN chmod -R 644 /etc/cron.d/
Inside the container running ls -l inside /etc/cron.d/ shows
-rw-r--r-- 1 root root 118 Jul 20 11:03 wwwcron-cron-docker_test
When I mount the folder through my docker-compose through VOLUME
volumes:
- ./server/crontabs:/etc/cron.d
ls -l shows
-rwxrwxrwx 1 1000 staff 118 Jul 20 11:03 wwwcron-cron-docker_test
In addition if I manually create the cron file in the container it looks like this and this works
-rw-r--r-- 1 root root 118 Jul 22 15:50 wwwcron-cron-docker_test_inside_docker
Clearly the permissions and ownership are very different between COPY and VOLUME. But even a COPY with the exact same permissions does not work, while the same file created inside the container does.
Thanks to @BMitch I was able to find the issue, which was related to line endings: my host machine is Windows and the cron file originated on Windows as well, so the disparity in line endings meant cron did not pick it up automatically.
I added this line to my Dockerfile and it works like a charm
RUN find /etc/cron.d/ -type f -print0 | xargs -0 dos2unix
And iterating on that, the file is indeed 1 byte smaller after dos2unix is run, so you can verify that the conversion actually occurred:
-rw-r--r-- 1 root root 117 Jul 25 08:33 wwwcron-cron-docker_test
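If you want to confirm the line endings before converting, the file utility reports CRLF terminators on affected files (a quick check, not part of the original fix):

file /etc/cron.d/*
# e.g. wwwcron-cron-docker_test: ASCII text, with CRLF line terminators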
Have you tried installing the crontab as a separate command in the Dockerfile?
i.e.
...
COPY crontabs /path/to/crontab.txt
RUN crontab -u myUser /path/to/crontab.txt
...
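If you go that route, you can verify the installed crontab afterwards (myUser as in the snippet above):

crontab -l -u myUser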
I'm running Celery under Docker Compose. I'd like to make Celery's Flower persistent. So I do:
version: '2'
volumes:
[...]
flower_data: {}
[...]
flower:
image: [base code image]
ports:
- "5555:5555"
volumes:
- flower_data:/flower
command:
celery -A proj flower --port=5555 --persistent=True --db=/flower/flower
However, then I get:
IOError: [Errno 13] Permission denied: 'flower.dat'
I ran the following to elucidate why:
bash -c "ls -al /flower; whoami; celery -A proj flower --persistent=True --db=/flower/flower"
This made it clear why:
flower_1 | drwxr-xr-x 3 root root 4096 Mar 10 23:05 .
flower_1 | drwxr-xr-x 7 root root 4096 Mar 10 23:05 ..
Namely, the directory is mounted as root, yet in [base code image] I ensure the running user is not root, as per Celery's docs to never run as root:
FROM python:2.7
...
RUN groupadd user && useradd --create-home --home-dir /usr/src/app -g user user
USER user
What would be the best way for Celery Flower to continue to run not as root, yet be able to use this named volume?
The following works: in the Dockerfile, install sudo and add the user to the sudo group, requiring a password:
RUN apt-get update
RUN apt-get -y install sudo
RUN echo "user:SECRET" | chpasswd && adduser user sudo
Then, in the Docker Compose config, the command will be:
bash -c "echo SECRET | sudo -S chown user:user /flower; celery -A proj flower --port=5555 --persistent --db=/flower/flower"
I'm not sure if this is the best way, though, or what the security implications of this are.
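An alternative worth noting (a sketch, not what the author used): a named volume that starts out empty is initialized with the contents and ownership of the image's directory at the mount path, so pre-creating /flower owned by the unprivileged user in the Dockerfile avoids sudo entirely:

FROM python:2.7
RUN groupadd user && useradd --create-home --home-dir /usr/src/app -g user user
# pre-create the mount point; a fresh named volume copies this ownership
RUN mkdir /flower && chown user:user /flower
USER user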