Permission denied when trying to use cargo with docker - docker

I have a docker container built from the following image : FROM debian:9.11-slim
I try to install rust using the following line in my Dockerfile and it works fine until the last line. I get a permission denied error whenever I try to run /rust/cargo. However, if I connect to the container and run it from there via the command line it works. However, I need to be able to run rust/cargo commands from the docker file. Any help?
ENV RUSTUP_HOME=/rust/rustup
ENV CARGO_HOME=/rust/cargo
RUN set -eux; \
url="https://raw.githubusercontent.com/rust-lang/rustup/1.22.1/rustup-init.sh"; \
wget -O rustup-init.sh "$url"; \
echo "b273275cf4d83cb6b991c1090baeca54  rustup-init.sh" | md5sum -c -; \
echo "8928261388c8fae83bfd79b08d9030dfe21d17a8b59e9dcabda779213f6a3d14  rustup-init.sh" | sha256sum -c -; \
bash ./rustup-init.sh --profile=minimal -y -t thumbv7em-none-eabihf; \
rm rustup-init.sh; \
chmod -R go+rwX /rust; \
/rust/cargo --version

The problem is chmod -R go+rwX
How to reproduce:
We have file:
#!/bin/bash
echo good
~ $ ls -l file
total 0
-rw-r--r-- 1 user staff 0 Jun 30 11:49 file
~ $ ./file
-bash: ./file: Permission denied
~ $ chmod go+rwX file
~ $ ls -l file
-rw-rw-rw- 1 user staff 23 Jun 30 11:50 file
~ $ ./file
-bash: ./file: Permission denied
As you can see, -rw-rw-rw- permissions do not allow the file to be executed.
The solution is to use one of the following:
chmod -R ug+rwx /rust (add all permissions to user and group)
chmod -R ugo+rwx /rust (add all permissions to all users)
chmod -R 777 /rust (add all permissions to all users (same as ugo+rwx))
chmod -R 755 /rust (add execution permissions to all users)
chmod 755 /rust/cargo (add execution permissions to all users only for execution file)
[if permissions already correct] don't set permissions at all (remove chmod -R go+rwX /rust) ← Best way
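The reproduction in the answer above, extended to already-executable files and to directories, as an added example that is not part of the original posts. `mode_after` is a hypothetical helper, and GNU or busybox `stat` is assumed.

```shell
#!/bin/sh
# Added example: what chmod's capital X does to files and directories.
# X grants execute/search only to directories and to files that already
# have an execute bit for someone; lowercase x grants it unconditionally.

# Apply a symbolic change to a fresh file or directory and print the
# resulting octal mode.
mode_after() {  # usage: mode_after <file|dir> <initial-octal-mode> <symbolic-change>
    work=$(mktemp -d) || return 1
    target="$work/target"
    if [ "$1" = dir ]; then mkdir "$target"; else : > "$target"; fi
    chmod "$2" "$target"
    chmod "$3" "$target"
    stat -c '%a' "$target"      # GNU/busybox stat
    rm -rf "$work"
}

echo "file without x, go+rwX: $(mode_after file 644 go+rwX)"  # still not executable
echo "file with x,    go+rwX: $(mode_after file 755 go+rwX)"  # executable and world-writable
echo "directory,      go+rwX: $(mode_after dir 700 go+rwX)"   # searchable and world-writable
echo "file with x,    go+rX:  $(mode_after file 755 go+rX)"   # executable, not writable by others
echo "file without x, a+x:    $(mode_after file 644 a+x)"     # lowercase x always adds execute
```

On a toolchain directory, `go+rX` gives other users what they need to run the binaries without making them writable.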

I faced a similar issue but in a slightly different situation. I was using docker-compose pipeline in GitHub actions on EC2 Self-Hosted Runner, based on the native GitHub pipeline. I didn't remove the Rust toolchain installation, which caused reinstallation of cargo in every build on EC2 instance changing permissions and sourcing binaries from cargo source ~/.cargo/env, hence the permission error on the default system user.
In my case, the solution was simply removing the installation of Rust from the workflow.yml and sourcing the system rust source ~/.bashrc.

Related

Why is Docker CMD running as chronos in GKE?

I have a pod and NodePort service running on GKE.
In the Dockerfile for the container in my pod, I'm using gosu to run a command as a specific user:
startup.sh
exec /usr/local/bin/gosu mytestuser "$@"
Dockerfile
FROM ${DOCKER_HUB_PUBLIC}/opensuse/leap:latest
# Download and verify gosu
RUN gpg --batch --keyserver-options http-proxy=${env.HTTP_PROXY} --keyserver hkps://keys.openpgp.org \
--recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 && \
curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/1.12/gosu-amd64" && \
curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/1.12/gosu-amd64.asc" && \
gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu && \
chmod +x /usr/local/bin/gosu
# Add tini
ENV TINI_VERSION v0.18.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--", "/startup/startup.sh"]
# Add mytestuser
RUN useradd mytestuser
# Run startup.sh which will use gosu to execute following `CMD` as `mytestuser`
RUN /startup/startup.sh
CMD ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/helloworld.jar"]
I've just noticed that when I log into the container on GKE and look at the processes running, the java process that I would expect to be running as mytestuser is actually running as chronos:
me@gke-cluster-1-default-ool-1234 ~ $ ps aux | grep java
root 9551 0.0 0.0 4296 780 ? Ss 09:43 0:00 /tini -- /startup/startup.sh java -Djava.security.egd=file:/dev/./urandom -jar /helloworld.jar
chronos 9566 0.6 3.5 3308988 144636 ? Sl 09:43 0:12 java -Djava.security.egd=file:/dev/./urandom -jar /helloworld.jar
Can anyone explain what's happening, i.e. who is the chronos user, and why my process is not running as mytestuser?
When you RUN adduser, it assigns a user ID in the image's /etc/passwd file. Your script launches the process using that numeric user ID. When you subsequently run ps from the host, though, it looks up that user ID in the host's /etc/passwd file, and gets something different.
This difference doesn't usually matter. Only the numeric user ID matters for things like filesystem permissions, if you're bind-mounting a directory from the host. For security purposes it's important that the numeric user ID not be 0, but that's pretty universally named root.
When you run a useradd inside the container (or as part of the image build), it adds an entry to the /etc/passwd inside the container. The uid/gid will be in a shared namespace with the host, unless you enable user namespaces. However the mapping of those ids to names will be specific to the filesystem namespace where the process is running. Therefore in this scenario, the uid of mytestuser inside the container happens to be the same uid as chronos on the host.
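The name resolution both answers describe, as an added example that is not part of the original posts: the kernel records only the numeric UID, and the name depends on which passwd file is read. The function names and the two constructed passwd files are hypothetical, and a Linux /proc is assumed.

```shell
#!/bin/sh
# Added example: one numeric UID, two names, depending on the passwd file consulted.

# Real UID of a process, read from the kernel (Linux /proc); no name lookup involved.
proc_uid() {  # usage: proc_uid <pid>
    awk '/^Uid:/ { print $2; exit }' "/proc/$1/status"
}

# Name for a UID in a given passwd file; prints the bare number if there is no entry.
name_in() {  # usage: name_in <passwd-file> <uid>
    awk -F: -v uid="$2" '
        $3 == uid { print $1; found = 1; exit }
        END { if (!found) print uid }' "$1"
}

# Report a process owner as seen from two filesystem namespaces.
owner_report() {  # usage: owner_report <pid> <container-passwd> <host-passwd>
    uid=$(proc_uid "$1") || return 1
    printf 'uid=%s container=%s host=%s\n' \
        "$uid" "$(name_in "$2" "$uid")" "$(name_in "$3" "$uid")"
}

# Constructed sample: both passwd files map this shell's own UID, to different names.
demo() {
    dir=$(mktemp -d) || return 1
    me=$(proc_uid $$)
    printf 'mytestuser:x:%s:%s::/home/mytestuser:/bin/sh\n' "$me" "$me" > "$dir/container_passwd"
    printf 'chronos:x:%s:%s::/home/chronos:/bin/sh\n' "$me" "$me" > "$dir/host_passwd"
    owner_report $$ "$dir/container_passwd" "$dir/host_passwd"
    rm -rf "$dir"
}
demo
```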

How to run an sh script in docker file?

When running a sh script in docker file, I got the following error:
./upload.sh: 5: ./upload.sh: sudo: not found ./upload.sh: 21:
./upload.sh: Bad substitution
sudo chmod 755 upload.sh # line 5
version=$(git rev-parse --short HEAD)
echo "version $version"
echo "Uploading file"
for path in $(find public/files -name "*.txt"); do
echo "path $path"
WORDTOREMOVE="public/"
echo "WORDTOREMOVE $WORDTOREMOVE"
# cause of the error
newpath=${path//$WORDTOREMOVE/} # Line 21
echo "new path $path"
url=http://localhost:3000/${newpath}
...
echo "Uploading file"
...
done
DockerFile
FROM node:10-slim
EXPOSE 3000 4001
WORKDIR /prod/code
...
COPY . .
RUN ./upload.sh
RUN npm run build
CMD ./DockerRun.sh
Any idea?
If anyone faces the same issue, here is how I fixed it:
chmod +x upload.sh
git update-index --chmod=+x upload.sh (mandatory if you pushed the file to remote branch before changing its permission)
The docker image you are using (node:10-slim) has no sudo installed on it because this docker image runs processes as user root:
docker run -it node:10-slim bash
root@68dcffceb88c:/# id
uid=0(root) gid=0(root) groups=0(root)
root@68dcffceb88c:/# which sudo
root@68dcffceb88c:/#
When your Dockerfile runs RUN ./upload.sh it will run:
sudo chmod 755 upload.sh
Using sudo inside the docker fails because sudo is not installed, there is no need to use sudo inside the docker because all of the commands inside the docker run as user root.
Simply remove the sudo from line number 5.
If you wish to update the running PATH variable run:
PATH=$PATH:/directorytoadd/bin
This will append the directory "/directorytoadd/bin" to the current path.

Permission denied metricbeat on openshift

I'm trying to deploy metricbeat on openshift, and after many hours of work I cannot get it to work.
The same image is running normally on docker.
Thank you
#Dockerfile
FROM docker.elastic.co/beats/metricbeat:7.2.0
COPY metricbeat.yml /usr/share/metricbeat/metricbeat.yml
USER root
RUN mkdir /var/log/metricbeat \
&& chown metricbeat /usr/share/metricbeat/metricbeat.yml \
&& chown metricbeat /usr/share/metricbeat/metricbeat \
&& chmod go-w /usr/share/metricbeat/metricbeat.yml \
&& chown metricbeat /var/log/metricbeat
COPY entrypoint.sh /usr/local/bin/custom-entrypoint
RUN chmod +x /usr/local/bin/custom-entrypoint \
&& chown metricbeat /usr/local/bin/custom-entrypoint
ENV PATH="/usr/share/metricbeat:${PATH}"
USER metricbeat
ENTRYPOINT [ "/usr/local/bin/custom-entrypoint" ]
#entrypoint.sh
#!/usr/bin/env bash
/usr/share/metricbeat/metricbeat -e --strict.perms=false -c /usr/share/metricbeat/metricbeat.yml
Error: /usr/local/bin/custom-entrypoint: line 2: /usr/share/metricbeat/metricbeat: Permission denied
The Dockerfile shows switching to the root user while setting up the directory structure and permissions when building the image, and finally switching to USER metricbeat to run the container with it.
However, by default OpenShift runs containers with a user with a random UID (from a preconfigured range).
One option is to relax the security policy as Graham Dumpleton suggested.
To make it work without relaxing the security, I'll suggest to change ownership as follows:
RUN chown -R metricbeat:root /usr/share/metricbeat \
&& chmod -R 0775 /usr/share/metricbeat
...or incorporate the above two commands in the first RUN instruction.
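A check for the situation the answer above describes, as an added example that is not part of the original posts. It assumes the random UID runs with group 0, which is why the answer changes group ownership to root. `group_gaps` is a hypothetical helper, GNU or busybox `find` and `stat` are assumed, and the sample tree is constructed.

```shell
#!/bin/sh
# Added example: audit a directory for entries that a random UID running with a
# given group cannot read or execute. Assumes GNU or busybox find/stat.

# Print paths whose group is not <gid>, or whose group bits lack the read or
# execute/search permission that the owner has.
group_gaps() {  # usage: group_gaps <gid> <dir>
    want=$1
    find "$2" -exec stat -c '%g %a %n' {} + | sort -k3 |
    while read -r gid mode path; do
        perms=$(printf '%03d' "$((mode % 1000))")   # drop any setuid/setgid/sticky digit
        owner=${perms%??}
        rest=${perms#?}
        group=${rest%?}
        needed=$((owner & 5))                       # read (4) + execute/search (1)
        if [ "$gid" != "$want" ] || [ "$((group & needed))" -ne "$needed" ]; then
            printf '%s\n' "$path"
        fi
    done
}

# Constructed sample tree: a binary only its owner may run, next to a readable config.
demo() {
    tree=$(mktemp -d) || return 1
    : > "$tree/metricbeat";     chmod 0700 "$tree/metricbeat"
    : > "$tree/metricbeat.yml"; chmod 0644 "$tree/metricbeat.yml"
    chmod 0755 "$tree"
    grp=$(stat -c '%g' "$tree")                 # stands in for group 0 in this sample
    before=$(group_gaps "$grp" "$tree" | wc -l)
    chmod -R 0775 "$tree"                       # the permission change from the answer
    after=$(group_gaps "$grp" "$tree" | wc -l)
    rm -rf "$tree"
    echo "blocked before: $((before)), after: $((after))"
}
demo
```

In an image build, running `group_gaps 0 /usr/share/metricbeat` after the `chown`/`chmod` step and failing on any output catches the problem before deployment.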

Can see a file in Docker container, but cannot access it

I'm new to Docker and ran into the following problem:
In my Dockerfile I have these lines:
ADD dir/archive.tgz /dir/
RUN tar -xzf /dir/archive2.tar.gz -C /dir/
RUN ls -l /dir/
RUN ls -l /dir/dir1/
The first ls prints out files correctly and I can see that dir1 was created inside dir by the archive, with permissions drwxr-xr-x. But the second ls gives me:
ls: "cannot access /dir/dir1/: No such file or directory"
I thought that if the Docker can see a file, it can access it. Do I need to do some special magic here?
I thought that if the Docker can see a file, it can access it.
In a way you are right, but also missing a piece of info. Those RUN commands are not necessarily sequentially executed since docker operates in layers, and your third RUN command is executed while your first might be skipped. In order to preserve proper execution order you need to put them in same RUN command as such so they end up on the same layer (and are updated together):
RUN tar -xzf /dir/archive2.tar.gz -C /dir/ && \
ls -l /dir/ && \
ls -l /dir/dir1/
This is a common issue, most often when this is put in Dockerfile:
RUN apt-get update
RUN apt-get install some-package
Instead of this:
RUN apt-get update && \
apt-get install some-package
Note: This is in line with best practices for usage of RUN command in Dockerfile, documented here: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#run and avoids possible confusion with caches/layers...
To recreate your problem here is small test to resemble similar setup to yours, depending on actual directory structure in your archive this may differ:
Dummy archive 2 with dir/dir1/somefile.txt created:
mkdir -p ~/test-sowf/dir/dir1 && cd ~/test-sowf && echo "Yay" | tee --append dir/dir1/somefile.txt && tar cvzf archive2.tar.gz dir && rm -rf dir
Dockerfile created in ~/test-sowf with following content
from ubuntu:latest
COPY archive2.tar.gz /dir/
RUN tar xvzf /dir/archive2.tar.gz -C /dir/ && \
ls -l /dir/ && \
ls -l /dir/dir/dir1/
Build command like so:
docker build -t test-sowf .
Gives following result:
Sending build context to Docker daemon 5.632kB
Step 1/3 : from ubuntu:latest
---> 452a96d81c30
Step 2/3 : COPY archive2.tar.gz /dir/
---> Using cache
---> 852ef4f706d3
Step 3/3 : RUN tar xvzf /dir/archive2.tar.gz -C /dir/ && ls -l /dir/ && ls -l /dir/dir/dir1/
---> Running in b2ab281190a2
dir/
dir/dir1/
dir/dir1/somefile.txt
total 8
-rw-r--r-- 1 root root 177 May 10 15:43 archive2.tar.gz
drwxr-xr-x 3 1000 1000 4096 May 10 15:43 dir
total 4
-rw-r--r-- 1 1000 1000 4 May 10 15:43 somefile.txt
Removing intermediate container b2ab281190a2
---> 05b7dfe52e36
Successfully built 05b7dfe52e36
Successfully tagged test-sowf:latest
Note that extracted files are with 1000:1000 as opposed to root:root for the archive, so unless you are not running from some other user (non root) you should not have problems with user, but, depending on your archive you might run into path problems (/dir/dir/dir1 as shown here).
test that file is correct, and contains 'Yay' inside:
docker run --rm --name test-sowf test-sowf:latest cat /dir/dir/dir1/somefile.txt
clean the test mess afterwards (deliberately not using rm -rf but cleaning individual files):
docker rmi test-sowf && cd && rm ~/test-sowf/archive2.tar.gz && rm ~/test-sowf/Dockerfile && rmdir ~/test-sowf
For those using docker-compose:
Sometimes when you volume mount a folder/file from one container to another before it exists, it can have weird permissions after it's created
For example if one container is certbot and another is your webserver, certbot will take time to generate the /etc/letsencrypt folder and its contents
From the webserver you might be able to see the folder or its contents with an ls, but not open them. You can see the behavior with a cat * and you'll get back
cat: <files in question>: No such file or directory
One solution is generating the folder at build time with a RUN mkdir -p /directory/of/choice in your dockerfile for the container generating the folder/files. Then the folder will exist and docker will happily mount it to your other container or host machine the way you want it to

Docker does not run cron job files with external origin (host - windows)

I use supervisor to run cron and nginx, the problem is when I try to COPY or VOLUME mount my cron files, it does not run my cron files in /etc/cron.d
But when I exec -it <container_id> bash into the container and create the exact same cron file from inside, it is immediately recognized and runs as it should.
Dockerfile :
FROM phusion/baseimage:latest
ENV TERM xterm
ENV HOME /root
RUN apt-get update && apt-get install -y \
nginx \
supervisor \
curl \
nano \
net-tools
RUN rm -rf /etc/nginx/*
COPY nginx_conf /etc/nginx
COPY supervisor_conf /etc/supervisor/
RUN mkdir -p /var/log/supervisor
COPY crontabs /etc/cron.d/
RUN chmod -R 644 /etc/cron.d/
CMD /usr/bin/supervisord
The cron itself
* * * * * root curl --silent http://127.0.0.1/cronjob/cron_test_docker.php >> /var/www/html/log/docker_test.log 2>&1
cron and nginx run through supervisor
[supervisord]
nodaemon = true
[program:nginx]
command = /usr/sbin/nginx -g "daemon off;"
autostart = true
[program:cron]
command = /usr/sbin/cron -f
autostart = true
The logs inside /var/log/supervisor/ relating to cron for stdout and stderr are empty.
I also tried stripping out supervisor and running cron on its own through phusion and CMD cron -f but got the same issue of it not working when the source is external(COPY or VOLUME) and magically works when created inside the container.
Initially believed it to be a permissions issue and tried chmod 644 (as this was the permission a file created in the container had) on all files that were the result of COPY into.
RUN chmod 644 /etc/cron.d/
After which tried every possible combination of permissions with rwx to no avail.
Also, tried to append the line of the cronjob into /etc/crontab but it is not recognized in crontab -l.
COPY crontab /tmp/crontab
RUN cat crontab >> /etc/crontab
It would be really handy if it worked just when it was created through COPY or VOLUME as it is a hassle to create it manually in the container every time.
Any help would be greatly appreciated!
Edit 1 :
Some additional information about the file permissions after COPY or VOLUME.
When I perform
COPY crontabs /etc/cron.d/
RUN chmod -R 644 /etc/cron.d/
Inside the container running ls -l inside /etc/cron.d/ shows
-rw-r--r-- 1 root root 118 Jul 20 11:03 wwwcron-cron-docker_test
When I mount the folder through my docker-compose through VOLUME
volumes:
- ./server/crontabs:/etc/cron.d
ls -l shows
-rwxrwxrwx 1 1000 staff 118 Jul 20 11:03 wwwcron-cron-docker_test
In addition if I manually create the cron file in the container it looks like this and this works
-rw-r--r-- 1 root root 118 Jul 22 15:50 wwwcron-cron-docker_test_inside_docker
Clearly there are very different permissions and ownership when making COPY or VOLUME. But making a COPY with exact permissions does not work but seems to work when created in the container.
Thanks to @BMitch I was able to find the issue, which was related to line endings: since my host machine was Windows and the cron file origin was Windows as well, there was a disparity in the line endings, so cron did not pick it up automatically.
I added this line to my Dockerfile and it works like a charm
RUN find /etc/cron.d/ -type f -print0 | xargs -0 dos2unix
And iterating on that the size of the file is indeed 1 byte smaller when a dos2unix is run, so you can verify if this operation indeed occurred.
-rw-r--r-- 1 root root 117 Jul 25 08:33 wwwcron-cron-docker_test
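The line-ending problem from the answer above, as an added example that is not part of the original posts: it finds cron files containing carriage returns and strips them without `dos2unix`, reporting the bytes removed. `crlf_files` and `strip_cr` are hypothetical helpers, and the sample files are constructed from the cron entry in the question.

```shell
#!/bin/sh
# Added example: detect cron files with CRLF line endings and normalise them in place.
CR=$(printf '\r')

# List regular files under <dir> that contain a carriage return.
crlf_files() {  # usage: crlf_files <dir>
    find "$1" -type f -exec grep -l "$CR" {} + | sort
}

# Remove carriage returns from <file>, keeping its owner and mode; print bytes removed.
strip_cr() {  # usage: strip_cr <file>
    before=$(wc -c < "$1")
    tmp=$(mktemp) || return 1
    tr -d '\r' < "$1" > "$tmp" && cat "$tmp" > "$1"
    rm -f "$tmp"
    echo $((before - $(wc -c < "$1")))
}

# Constructed sample: the cron entry from the question, saved once with CRLF and once with LF.
demo() {
    dir=$(mktemp -d) || return 1
    job='* * * * * root curl --silent http://127.0.0.1/cronjob/cron_test_docker.php >> /var/www/html/log/docker_test.log 2>&1'
    printf '%s\r\n' "$job" > "$dir/from_windows"
    printf '%s\n' "$job" > "$dir/from_linux"
    for f in $(crlf_files "$dir"); do
        echo "${f##*/}: removed $(strip_cr "$f") byte(s)"
    done
    left=$(crlf_files "$dir" | wc -l)
    echo "still CRLF: $((left))"
    rm -rf "$dir"
}
demo
```

Running `crlf_files /etc/cron.d` as a build step and failing when it prints anything keeps Windows-edited cron files from shipping silently broken.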
Have you tried installing the crontab as a separate command in the Dockerfile?
i.e.
...
COPY crontabs /path/to/crontab.txt
RUN crontab -u myUser /path/to/crontab.txt
...
