I'm running Celery under Docker Compose. I'd like to make Celery's Flower persistent. So I do:
version: '2'
volumes:
  [...]
  flower_data: {}
[...]
  flower:
    image: [base code image]
    ports:
      - "5555:5555"
    volumes:
      - flower_data:/flower
    command:
      celery -A proj flower --port=5555 --persistent=True --db=/flower/flower
However, then I get:
IOError: [Errno 13] Permission denied: 'flower.dat'
I ran the following to elucidate why:
bash -c "ls -al /flower; whoami; celery -A proj flower --persistent=True --db=/flower/flower"
The output made the cause clear:
flower_1 | drwxr-xr-x 3 root root 4096 Mar 10 23:05 .
flower_1 | drwxr-xr-x 7 root root 4096 Mar 10 23:05 ..
Namely, the directory is mounted owned by root, yet in [base code image] I ensure the process does not run as root, per Celery's docs, which say never to run as root:
FROM python:2.7
...
RUN groupadd user && useradd --create-home --home-dir /usr/src/app -g user user
USER user
What would be the best way for Celery Flower to continue to run not as root, yet be able to use this named volume?
The following works: in the Dockerfile, install sudo and add the user to the sudo group, requiring a password:
RUN apt-get update
RUN apt-get -y install sudo
RUN echo "user:SECRET" | chpasswd && adduser user sudo
Then, in the Docker Compose config, the command will be:
bash -c "echo SECRET | sudo -S chown user:user /flower; celery -A proj flower --port=5555 --persistent --db=/flower/flower"
I'm not sure if this is the best way, though, or what the security implications of this are.
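An alternative that avoids sudo entirely: named volumes (unlike bind mounts) are initialized from the image's content and ownership the first time they are used, so pre-creating the mount point with the right owner in the Dockerfile is usually enough. A minimal sketch, assuming the same user/group as the Dockerfile above:

# Still running as root, before the USER directive: pre-create the mount
# point so the named volume inherits this ownership on first use.
RUN mkdir -p /flower && chown user:user /flower
USER user

Note this only helps for named volumes; a host bind mount would still shadow the directory with the host's ownership.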
Related
Just recently I stumbled on an SSH issue where I cannot figure out what is missing. We use GitLab CI to build and deploy the project to one of our remote servers. As part of the upgrade plan, we need to replace the aging Debian 6 server with a new RHEL 7 server. I cannot get passwordless SSH to work right from the GitLab Runner to a remote machine.
I created a reproducible example in a Dockerfile; the IP of the remote server and the user are replaced with non-sensitive data.
FROM centos:7
RUN yum install -y epel-release
RUN yum update -y
RUN yum install -y openssh-clients
RUN useradd -m joe
RUN mkdir -p /home/joe/.ssh
COPY id_rsa_shared /home/joe/.ssh/id_rsa
RUN echo "Host *\n\tStrictHostKeyChecking no\n" >> /home/joe/.ssh/config
RUN ssh-keyscan 10.x.x.x >> /home/joe/.ssh/known_hosts
RUN chown -R joe:joe /home/joe/.ssh
USER joe
CMD ["/bin/bash"]
The file id_rsa_shared is created on local machine with the following command:
ssh-keygen -t rsa -b 2048 -f ./id_rsa_shared
ssh-copy-id -i ./id_rsa_shared joe@10.x.x.x
This works locally. A simple ssh joe@10.x.x.x uname -a in the docker container will output the following:
Linux newweb01p.company.local 3.10.0-1160.25.1.el7.x86_64 #1 SMP Tue Apr 13 18:55:45 EDT 2021 x86_64 x86_64 x86_64 GNU/Linux
However, if I commit this to a branch as GitLab CI script, as shown:
image: centos:7
stages:
  - deploy
dev-www:
  stage: deploy
  tags:
    - docker
  environment:
    name: dev-www
    url: http://dev-www.company.local
  variables:
    DEV_HOST: 10.x.x.x
    APP_ENV: dev
    DEV_USER: joe
  script:
    - whoami
    - yum install -y epel-release
    - yum update -y
    - yum install -y openssh-clients
    - useradd -m joe
    - mkdir -p /home/joe/.ssh
    - cp "./gitlab/known_hosts" /home/joe/.ssh/known_hosts
    - echo "$DEV_USER_OPENSSH_KEY" >> /home/joe/.ssh/id_rsa
    - echo "Host *\n\tStrictHostKeyChecking no\n" >> /home/joe/.ssh/config
    - chown -R joe:joe /home/joe/.ssh/
    - chmod 600 /home/joe/.ssh/*
    - chmod 700 /home/joe/.ssh
    - ls -Fsal /home/joe/.ssh
    - su - joe
    - ssh -oStrictHostKeyChecking=no "${DEV_USER}@${DEV_HOST}" uname -a
  when: manual
The pipeline will fail authentication as shown:
Running with gitlab-runner 13.12.0 (7a6612da)
on docker.hqgitrunner01d.company.local K47w1s77
Preparing the "docker" executor
Using Docker executor with image centos:7 ...
Authenticating with credentials from job payload (GitLab Registry)
Pulling docker image centos:7 ...
Using docker image sha256:xxx for centos:7 with digest centos:7@sha256:xxxx ...
Preparing environment
Running on runner-k47w1s77-project-93-concurrent-0 via hqgitrunner01d.company.local...
Getting source from Git repository
Fetching changes...
Reinitialized existing Git repository in /builds/webversion3/API/.git/
Checking out 6a7c193b as tdr/psr4-composer...
Updating/initializing submodules recursively...
Executing "step_script" stage of the job script
Using docker image sha256:xxx for centos:7 with digest centos:7@sha256:xxx ...
$ whoami
root
$ useradd -m joe
$ mkdir -p /home/joe/.ssh
$ cp "./gitlab/known_hosts" /home/joe/.ssh/known_hosts
$ echo "$DEV_USER_OPENSSH_KEY" >> /home/joe/.ssh/id_rsa
$ echo "Host *\n\tStrictHostKeyChecking no\n" >> /home/joe/.ssh/config
$ chown -R joe:joe /home/joe/.ssh/*
$ chmod 600 /home/joe/.ssh/*
$ chmod 700 /home/joe/.ssh
$ ls -Fsal /home/joe/.ssh
total 16
0 drwx------ 2 root root 53 Apr 1 15:19 ./
0 drwx------ 3 joe joe 74 Apr 1 15:19 ../
4 -rw------- 1 joe joe 37 Apr 1 15:19 config
4 -rw------- 1 joe joe 3414 Apr 1 15:19 id_rsa
8 -rw------- 1 joe joe 6241 Apr 1 15:19 known_hosts
$ su - joe
$ ssh -oStrictHostKeyChecking=no "${DEV_USER}@${DEV_HOST}" uname -a
Warning: Permanently added '10.x.x.x' (ECDSA) to the list of known hosts.
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Cleaning up file based variables
ERROR: Job failed: exit code 1
Maybe there's a step I missed, because I get a 'Permission denied, please try again' message. How do I get the Docker executor to use passwordless SSH to a remote server?
The solution was really simple and straightforward. The important part is understanding SSH.
The solution works; here is a snippet from the .gitlab-ci.yml for those who have the same problem as I did.
...
- mkdir -p ~/.ssh
- touch ~/.ssh/id_rsa ~/.ssh/config ~/.ssh/known_hosts
- chmod 600 ~/.ssh/id_rsa ~/.ssh/config ~/.ssh/known_hosts
- echo "$OPENSSH_KEY" >> ~/.ssh/id_rsa
- echo -e "Host *\n\tStrictHostKeyChecking no" >> ~/.ssh/config
- ssh-keyscan ${DEV_HOST} >> ~/.ssh/known_hosts
Just inline all your ssh options. Use -i to specify your key file. You can also use -o UserKnownHostsFile to specify your known hosts file -- you don't need to copy all of that into an ssh configuration.
This should be enough to ssh successfully:
# ...
- echo "$DEV_USER_OPENSSH_KEY" > "${CI_PROJECT_DIR}/id_rsa.key"
- chmod 600 "${CI_PROJECT_DIR}/id_rsa.key"
- |
  ssh -i "${CI_PROJECT_DIR}/id_rsa.key" \
    -o IdentitiesOnly=yes \
    -o UserKnownHostsFile="${CI_PROJECT_DIR}/gitlab/known_hosts" \
    -o StrictHostKeyChecking=no \
    user@host ...
Also, since you're disabling StrictHostKeyChecking, you can just use /dev/null for your UserKnownHostsFile. If you want key checking, omit the StrictHostKeyChecking=no option.
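For illustration, here is the same command with the known-hosts check effectively disabled (a minimal sketch; user@host and uname -a are placeholders):

- |
  ssh -i "${CI_PROJECT_DIR}/id_rsa.key" \
    -o IdentitiesOnly=yes \
    -o UserKnownHostsFile=/dev/null \
    -o StrictHostKeyChecking=no \
    user@host uname -a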
I deployed 2 containers with the --scale flag:
docker-compose up -d --scale gitlab-runner=2
Two containers are deployed, named scalecontainer_gitlab-runner_1 and scalecontainer_gitlab-runner_2 respectively.
I want to map a different volume to each container:
/srv/gitlab-runner/config_${DOCKER_SCALE_NUM}:/etc/gitlab-runner
Getting this error:
WARNING: The DOCKER_SCALE_NUM variable is not set. Defaulting to a blank string.
Is there any way I can map a different volume to each separate container?
services:
  gitlab-runner:
    image: "gitlab/gitlab-runner:latest"
    restart: unless-stopped
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - /srv/gitlab-runner/config_${DOCKER_SCALE_NUM}:/etc/gitlab-runner
version: "3.5"
I don't think you can; there's an open feature request for this here. Instead, I'll describe an alternative method for getting what you want.
Try creating a symbolic link from within the container that points to the directory you want. You can determine the "number" of the container after it's created by reading the container name from the Docker API and taking the final segment. To do this you have to mount the Docker socket into the container, which has big security implications.
Setup
Here is a simple script to get the number of the container (Credit Tony Guo).
get-name.sh
#!/bin/sh
# Ask the Docker API (over the mounted socket) for this container's JSON.
DOCKERINFO=$(curl -s --unix-socket /run/docker.sock http://docker/containers/$HOSTNAME/json)
# Extract the trailing ordinal from the container name (e.g. ..._app_2 -> 2).
ID=$(python3 -c "import sys, json; print(json.loads(sys.argv[1])[\"Name\"].split(\"_\")[-1])" "$DOCKERINFO")
echo "$ID"
Then we have a simple entrypoint file which gets the container number, creates the specific config directory if it doesn't exist, and links its specific config directory to a known location (/etc/config in this example).
entrypoint.sh
#!/bin/sh
# Get the number of this container
NAME=$(get-name)
CONFIG_DIR="/config/config_${NAME}"
# Create a config dir for this container if none exists
mkdir -p "$CONFIG_DIR"
# Create a sym link from a well known location to our individual config dir
ln -s "$CONFIG_DIR" /etc/config
exec "$@"
Next we have a Dockerfile to build our image. We need to set the entrypoint and install curl and python3 for it to work, and also copy in our get-name.sh script.
Dockerfile
FROM alpine
COPY entrypoint.sh entrypoint.sh
COPY get-name.sh /usr/bin/get-name
RUN apk update && \
    apk add curl python3 && \
    chmod +x entrypoint.sh /usr/bin/get-name
ENTRYPOINT ["/entrypoint.sh"]
Last, a simple compose file that specifies our service. Note that the docker socket is mounted, as well as ./config which is where our different config directories go.
docker-compose.yml
version: '3'
services:
  app:
    build: .
    command: tail -f
    volumes:
      - /run/docker.sock:/run/docker.sock:ro
      - ./config:/config
Example
# Start the stack
$ docker-compose up -d --scale app=3
Starting volume-per-scaled-container_app_1 ... done
Starting volume-per-scaled-container_app_2 ... done
Creating volume-per-scaled-container_app_3 ... done
# Check config directory on our host, 3 new directories were created.
$ ls config/
config_1 config_2 config_3
# Check the /etc/config directory in container 1, see that it links to the config_1 directory
$ docker exec volume-per-scaled-container_app_1 ls -l /etc/config
lrwxrwxrwx 1 root root 16 Jan 13 00:01 /etc/config -> /config/config_1
# Container 2
$ docker exec volume-per-scaled-container_app_2 ls -l /etc/config
lrwxrwxrwx 1 root root 16 Jan 13 00:01 /etc/config -> /config/config_2
# Container 3
$ docker exec volume-per-scaled-container_app_3 ls -l /etc/config
lrwxrwxrwx 1 root root 16 Jan 13 00:01 /etc/config -> /config/config_3
Notes
I think gitlab/gitlab-runner has its own entrypoint file, so you may need to chain them; see the sketch below.
You'll need to adapt this example to your specific setup/locations.
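A minimal sketch of that chaining, assuming the runner image's original entrypoint is /entrypoint run under dumb-init (check with docker inspect first; both paths here are assumptions to verify):

docker inspect --format '{{ json .Config.Entrypoint }}' gitlab/gitlab-runner:latest

#!/bin/sh
# Chained entrypoint: do our per-container setup, then hand off to the
# image's original entrypoint (the path reported by docker inspect above).
NAME=$(get-name)
CONFIG_DIR="/config/config_${NAME}"
mkdir -p "$CONFIG_DIR"
rm -rf /etc/gitlab-runner
ln -s "$CONFIG_DIR" /etc/gitlab-runner
exec /usr/bin/dumb-init /entrypoint "$@"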
Could you help me? I'm trying to run a container from a Dockerfile, but it shows this warning and my container does not start.
Starting remote-host ... done
compose.parallel.parallel_execute_iter: Finished processing: <Container: remote-host>
compose.parallel.feed_queue: Pending: set()
compose.parallel.parallel_execute_iter: Finished processing: <Service: remote_host>
compose.parallel.feed_queue: Pending: set()
Attaching to jenkinks, remote-host
compose.cli.verbose_proxy.proxy_callable: docker logs <- ('f2e305942e57ce1fe90c2ca94d3d9bbc004155a136594157e41b7a916d1ca7de', stdout=True, stderr=True, stream=True, follow=True)
remote-host | Unable to load host key: /etc/ssh/ssh_host_rsa_key
remote-host | Unable to load host key: /etc/ssh/ssh_host_ecdsa_key
remote-host | Unable to load host key: /etc/ssh/ssh_host_ed25519_key
remote-host | sshd: no hostkeys available -- exiting.
compose.cli.verbose_proxy.proxy_callable: docker events <- (filters={'label': ['com.docker.compose.project=jenkins', 'com.docker.compose.oneoff=False']}, decode=True)
My dockerfile is this:
FROM centos
RUN yum -y install openssh-server
RUN yum install -y passwd
RUN useradd remote_user && \
    echo "1234" | passwd remote_user --stdin && \
    mkdir /home/remote_user/.ssh && \
    chmod 700 /home/remote_user/.ssh
COPY remote_user.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user && \
    chmod 400 /home/remote_user/.ssh/authorized_keys
CMD /usr/sbin/sshd -D
Start with an empty dir and put the following in that dir as a file called Dockerfile:
FROM centos
RUN yum -y install openssh-server
RUN yum install -y passwd
RUN useradd remote_user
RUN echo "1234" | passwd remote_user --stdin
RUN mkdir /home/remote_user/.ssh
RUN chmod 700 /home/remote_user/.ssh
COPY remote_user.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user
RUN chmod 400 /home/remote_user/.ssh/authorized_keys
CMD /usr/sbin/sshd -D
# CMD ["/bin/bash"]
# ... save this file as Dockerfile then in same dir issue following
#
# docker build --tag stens_centos . # creates image stens_centos
#
# docker run -d stens_centos sleep infinity # launches container and just sleeps only purpose here is to keep container running
#
# docker ps # show running containers
#
#
# ... find CONTAINER ID from above and put into something like this
#
# docker exec -ti $( docker ps | grep stens_centos | cut -d' ' -f1 ) bash # login to running container
#
Then, in that same dir, put your ssh key files, as per:
eve@milan ~/Dropbox/Documents/code/docker/centos $ ls -la
total 28
drwxrwxr-x 2 eve eve 4096 Nov 2 15:20 .
drwx------ 77 eve eve 12288 Nov 2 15:14 ..
-rw-rw-r-- 1 eve eve 875 Nov 2 15:20 Dockerfile
-rwx------ 1 eve eve 3243 Nov 2 15:18 remote_user
-rwx------ 1 eve eve 743 Nov 2 15:18 remote_user.pub
Then cat out the Dockerfile and copy and paste the commands it explains at the bottom of the file ... for me all of them just worked OK. After I copied and pasted those commands, the container got built and executed:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0a06ebd2752a stens_centos "sleep infinity" 7 minutes ago Up 7 minutes pedantic_brahmagupta
Keep in mind you must define at the bottom of your Dockerfile the CMD (or similar) that is exactly what you want executed as the container runs. Typically that is a server, which by definition runs forever. Alternatively, the CMD can be something which runs and then finishes, like a batch job, in which case the container will exit when that job finishes. With this in mind, I suggest you confirm that sshd -D will hold as a server rather than terminate immediately upon launch of the container.
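As for the original "no hostkeys available" error: sshd exits because /etc/ssh contains no host keys, and on CentOS those are normally generated by a systemd unit that never runs inside a container. A minimal sketch of the usual fix, added to the Dockerfile before the CMD:

# Generate any missing host keys (rsa, ecdsa, ed25519) at build time,
# since the systemd sshd-keygen service never runs inside a container.
RUN ssh-keygen -A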
I've just replied to this GitHub issue, but here's what I experienced and how I fixed it.
I just had this issue with my Jekyll blog site, which I normally bring up using docker-compose with a mapped volume so it rebuilds when I create a new post. It was hanging, so I ran docker-compose up with the --verbose switch and saw the same compose.parallel.feed_queue: Pending: set().
I tried it on my MacBook and it was working fine.
I didn't have any experimental features turned on, but I needed to go into (on Windows) Settings -> Resources -> File Sharing and add the folder I was mapping in my docker-compose file (the root of my blog site).
I re-ran docker-compose and it's now up and running.
For my job, I would like to run Jenkins and rootless Docker (with the sysbox runtime for this container only), all inside rootless Docker.
I would like this because I need a secure environment, given that I don't inspect the Jenkins pipelines that run.
But when I run rootless Docker inside rootless Docker, I get this error:
[rootlesskit:parent] error: failed to setup UID/GID map: newuidmap 54 [0 1000 1 1 100000 65536] failed: newuidmap: write to uid_map failed: Operation not permitted
: exit status 1
I have tried many things but failed to get it to work. Does someone have a solution for this, please?
Thanks for reading me, have a nice day!
Edit 1
Hello, I'm taking the liberty of bumping this question: it is essential for the security of our environment, and my bosses remind me of it every day. Does anyone have an answer to this problem, please?
Things get a little tricky when you want to use the docker build command inside a Jenkins container.
I stumbled upon this issue when I wanted to build Docker images without being root, under the user 'jenkins' instead.
I wrote the solution in an article in which I explain in detail what is happening under the hood.
The key point is to figure out which GID the docker.sock socket belongs to (it depends on the system). So here is what you've got to do:
Run the command:
$ stat /var/run/docker.sock
Output:
jenkins#wsl-ubuntu:~$ stat /var/run/docker.sock
File: /var/run/docker.sock
Size: 0 Blocks: 0 IO Block: 4096 socket
Device: 17h/23d Inode: 552 Links: 1
Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 1001/ docker)
Access: 2021-03-03 10:43:05.570000000 +0200
Modify: 2021-03-03 10:43:05.570000000 +0200
Change: 2021-03-03 10:43:05.570000000 +0200
Birth: -
In this case the GID is 1001, but it can also be 999 or something else on your machine.
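If you only want the number (for scripting it into the build), GNU stat can print the group ID directly; a small convenience, not a required step:

# Print just the numeric GID of the socket, e.g. 1001
stat -c '%g' /var/run/docker.sock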
Now, create a Dockerfile and paste the code below replacing the ENV variable with your own from the stat command output above:
FROM jenkins/jenkins:lts-alpine
USER root
# Replace 1001 with your own docker.sock GID (Dockerfiles do not support inline comments)
ARG DOCKER_HOST_GID=1001
ARG JAVA_OPTS=""
ENV DOCKER_HOST_GID $DOCKER_HOST_GID
ENV JAVA_OPTS $JAVA_OPTS
RUN set -eux \
    && apk --no-cache update \
    && apk --no-cache upgrade --available \
    && apk --no-cache add shadow \
    && apk --no-cache add docker curl --repository http://dl-cdn.alpinelinux.org/alpine/latest-stable/community \
    && deluser --remove-home jenkins \
    && addgroup -S jenkins -g $DOCKER_HOST_GID \
    && adduser -S -G jenkins -u $DOCKER_HOST_GID jenkins \
    && usermod -aG docker jenkins \
    && apk del shadow curl
USER jenkins
WORKDIR $JENKINS_HOME
For the sake of a working example, here is a docker-compose file:
version: '3.3'
services:
  jenkins:
    image: jenkins_master
    container_name: jenkins_master
    hostname: jenkins_master
    restart: unless-stopped
    env_file:
      - jenkins.env
    build:
      context: .
    cpus: 2
    mem_limit: 1024m
    mem_reservation: 800M
    ports:
      - 8090:8080
      - 50010:50000
      - 2375:2376
    volumes:
      - ./jenkins_data:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - default
volumes:
  jenkins_data: {}
networks:
  default:
    driver: bridge
Now let's create the ENV variables (note that env files only treat whole lines starting with # as comments):
cat > jenkins.env <<EOF
# Replace 1001 with your own docker.sock GID
DOCKER_HOST_GID=1001
JAVA_OPTS=-Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
EOF
and lastly, run the command docker-compose up -d.
It will build the image, and run it.
Then visit http://host_machine_ip:8090, and that's all.
If you run docker inspect --format '{{ .Config.Env }}' jenkins_master you will see that the 1st and 2nd variables are the ones we set.
More details can be found here: How to run rootless docker in dockerized Jenkins installation
I am new to Docker and I am trying to do a simple setup, the scope of which is:
create a home folder and give it appropriate permissions
On the host side:
I have a user called devel, which I put into the docker group.
When I run 'groups devel' I get back the group docker; the UID is 1000 and the GID is 1000.
my subuid file:
devel:1000:1
devel:100000:65536
my subgid file:
devel:1000:1
devel:100000:65536
Following a tutorial, I set the sysconfig file so the daemon starts with 'devel' as the remapping option.
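For reference, a minimal sketch of what that typically looks like on an Oracle Linux / RHEL-style install (the file path and OPTIONS variable are assumptions based on the stock sysconfig layout; the equivalent daemon.json setting is "userns-remap": "devel"):

# /etc/sysconfig/docker (assumed layout)
OPTIONS="--userns-remap=devel"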
I then created this Dockerfile:
USER root
RUN groupadd -g 1000 devel
#Create the user with home directory
RUN useradd -d /var/opt/devel -u 1000 -g 1000 --shell /bin/bash devel
#Just for being very-very-very-very sure:
RUN chown -vhR devel:devel /var/opt/devel
#test with ls
RUN ls -ltr /var/opt/
USER devel
#test again by creating a file:
RUN touch /var/opt/devel/TEST.txt
RUN ls -ltr /var/opt/devel/TEST.txt
USER root
RUN ls -ltr /var/opt
USER devel
CMD ["/bin/bash"]
The result is that the directory which is created has group "devel", but the owning user is always root.
After 12 hours of checking why, I disabled SELinux, for another reason (it did not let me use chown at all), but now I am stuck and don't know what other magic I need to do.
The Docker version is 18.09.1-ol (Oracle Linux 7).
Hope someone has an idea.
Thanks