Permission denied when trying to edit .bashrc in a container - docker

Trying to run the command below:
docker container exec container-name echo '. $HOME/.asdf/asdf.sh' >> /root/.bashrc
but I'm getting the error below:
warning: An error occurred while redirecting file '/root/.bashrc'
open: Permission denied
I just created the container.
I can get into the container as root and execute the same command without error.
Does anybody know what I'm missing?

The redirection (>>) is parsed by your host shell before docker ever runs, so you are trying to write to your host's root user's .bashrc. It's a good thing you didn't run this as root on the host.
I think this is what you actually want:
docker container exec container-name sh -c 'echo . $HOME/.asdf/asdf.sh >> /root/.bashrc'
You can also write /root instead of $HOME, since you already use the literal path in the redirection target.
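To confirm the line actually landed inside the container (a quick check, using the same container name):
docker container exec container-name tail -n 1 /root/.bashrc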

Related

docker exec -> permission denied when executing through ssh

I am trying to execute a command on a docker container that is running on a remote server. I run most commands through ssh and they all work correctly. However, this command modifies the file /etc/environment and I get a "permission denied" error.
The command in question is docker exec container_id echo 'WDS_SOCKET_PORT=XXXXX' >> /etc/environment
If I run the command from the docker host, it works
If I run a simple command remotely using ssh user@ip docker exec container_id ls, it works
If I run this command remotely using ssh user@ip docker exec container_id echo 'WDS_SOCKET_PORT=XXXXX' >> /etc/environment I get sh: 1: cannot create /etc/environment: Permission denied
I tried adding the option -u 0 to the docker exec command with no luck.
I don't mind making changes to the Dockerfile since I can kill, remove or recreate this container with no problem.
The error isn't coming from docker or ssh; it's coming from the shell that parses the command you want to run, so the redirection targets a file on the docker host rather than inside the container. To do I/O redirection inside the container, you need to run a shell there and have that shell parse the command.
ssh user@ip "docker exec container_id /bin/sh -c 'echo \"WDS_SOCKET_PORT=XXXXX\" >> /etc/environment'"
EDIT: Note that the whole docker command should be surrounded by quotes. I believe this is because ssh might otherwise parse different parts of the command as parameters of the docker command. This way, each sub-command is clearly delimited.
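If the nested quoting gets unwieldy, an alternative (a sketch, assuming bash on the local machine) is to feed the line in over stdin instead of quoting a redirection:
ssh user@ip "docker exec -i container_id sh -c 'cat >> /etc/environment'" <<< 'WDS_SOCKET_PORT=XXXXX'
ssh forwards local stdin to the remote command, docker exec -i forwards it into the container, and cat performs the append inside the container.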

How can I check if a directory exists in a Docker container?

I have a docker cp command in my script to copy a container directory to my host machine. In some cases the directory will not exist in the container, and I get "Error: No such container:path".
Is there a way to check if this directory exists in the container, and only perform docker cp if it does?
The reason for this is that not having this directory in the container is normal for some situations, so I'd like to avoid the error message.
To clarify: the source directory I want to copy doesn't exist in the container, so I cannot copy it. The destination directory exists.
The container is stopped, so docker exec doesn't work.
One solution could be to execute the following command:
docker exec container_id [ -d "/dir_path" ] && echo "Exists" || echo "Does not exist"
You can then tell from the printed message whether the directory exists.
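Since docker exec propagates the exit code of the command it runs, a script can also branch on that directly instead of parsing output. A minimal sketch with placeholder names (this, like the command above, assumes the container is running):
if docker exec container_id test -d /dir_path; then
    docker cp container_id:/dir_path ./local_dest
fi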

Update PATH in CentOS Docker image (alternative to Dockerfile ENV)

I'm provisioning a CentOS Docker image with Packer, using bash scripts instead of a Dockerfile to configure the image (this seems to be the Packer way). What I can't figure out is how to update the PATH variable so that my custom binaries can be executed like this:
docker run -i -t <container> my_binary
I have tried putting an .sh file in the /etc/profile.d/ folder and also writing directly to /etc/environment, but neither seems to take effect.
I suspect it has something to do with which shell Docker uses when executing commands in a disposable container. I thought it was the Bourne shell, but as mentioned earlier, neither the /etc/profile.d/ nor the /etc/environment approach worked.
UPDATE:
As I understand now, it is not possible to change environment variables in a running container, for the reasons explained in @tgogos' answer. However, I don't believe this is an issue in my case, since after Packer is done provisioning the image it commits it and uploads it to Docker Hub. A more accurate example follows:
$ docker run -itd --name test centos:6
$ docker exec -it test /bin/bash
[root@006a9c3195b6 /]# echo 'echo SUCCESS' > /root/test.sh
[root@006a9c3195b6 /]# chmod +x /root/test.sh
[root@006a9c3195b6 /]# echo 'export PATH=/root:$PATH' > /etc/profile.d/my_settings.sh
[root@006a9c3195b6 /]# echo 'PATH=/root:$PATH' > /etc/environment
[root@006a9c3195b6 /]# exit
$ docker commit test test-image:1
$ docker exec -it test-image:1 test.sh
Expecting to see SUCCESS printed but getting
OCI runtime exec failed: exec failed: container_linux.go:296: starting container process caused "exec: \"test.sh\": executable file not found in $PATH": unknown
UPDATE 2
I have updated PATH in ~/.bashrc, which lets me execute the following:
$ docker run -it test-image:1 /bin/bash
[root@8f821c7b9b82 /]# test.sh
SUCCESS
However running docker run -it test-image:1 test.sh still results in
docker: Error response from daemon: OCI runtime create failed: container_linux.go:296: ...
I can confirm that my image CMD is set to "/bin/bash". So can someone explain why running docker run -it test-image:1 test.sh doesn't source ~/.bashrc?
A few good points are mentioned in:
How to set an environment variable in a running docker container (also check the link to the relevant GitHub issue)
and Docker - Updating Environment Variables of a Container
where @BMitch mentions:
Destroy your container and start a new one up with the new environment variable using docker run -e .... It's identical to changing an environment variable on a running process, you stop it and restart with a new value passed in.
and in the comments section, he adds:
Docker doesn't provide a way to modify an environment variable in a running container because the OS doesn't provide a way to modify an environment variable in a running process. You need to destroy and recreate.
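Following that advice in this question's terms (a sketch; note that -e cannot reference the container's existing $PATH, so the full value has to be spelled out):
docker rm -f test
docker run -itd --name test -e PATH=/root:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin centos:6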
Update (see the comments section): you can use
docker commit --change "ENV PATH=your_new_path_here" test test-image:1
/etc/profile is only read by bash when invoked by a login shell.
For more information about which files are read by bash on startup see this article.
EDIT: If you change the last line in your example to:
docker exec -it test bash -lc test.sh
it works as you expect, because -l makes bash behave as a login shell and source /etc/profile, which in turn sources /etc/profile.d/my_settings.sh.
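To see the difference directly (a sketch against the same test container):
docker exec -it test bash -c 'echo $PATH'     # non-login shell: /root is missing
docker exec -it test bash -lc 'echo $PATH'    # login shell: /root is prepended via /etc/profile.d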

Docker container write permissions

I have some docker containers running on my machine, one of them being container_1
I am able to access container_1's CLI using
ant@ant~/D/m/l/db> docker exec -it container_1 bash
daemon@1997cc093b24:/$
This gets me into container_1's CLI, but without write permissions. The following commands give a permission denied error:
ant@ant~/D/m/l/db> docker exec -it container_1 touch test.txt
bash: test.txt: Permission denied
ant@ant~/D/m/l/db> docker exec -it container_1 bash
daemon@1997cc093b24:/$ touch test.txt
bash: test.txt: Permission denied
I also tried the --privileged option, but the problem persisted:
ant@ant~/D/m/l/db> docker --privileged=true exec -it container_1 touch test.txt
bash: test.txt: Permission denied
So I have two questions:
How do permissions in Docker work?
Is this kind of modification to a Docker filesystem recommended? If not, why not?
I have recently started using Docker, so please tolerate the amateur question. Thanks in advance :)
Docker runs commands as a Linux user, and that user is bound by Linux filesystem permissions. So the answer depends on:
The uid you are running commands as (this defaults to root, but can be overridden in your image with a USER instruction in the Dockerfile, on the docker run CLI, or within your docker-compose.yml file).
The location where your command runs, since you are using a relative path. This defaults to /, but can also be overridden, most often with WORKDIR in the Dockerfile.
The directory and file permissions at that location.
Use ls -al inside the container to see the current permissions. Use id to see the current uid. With docker exec you can pass a flag to change the current user. And to change permissions, you can use chmod to change the permissions themselves, chown to change the user ownership, and chgrp to change the group ownership. e.g.:
docker exec -u root container_1 chmod 777 .
That command will allow any user to read or write to the current folder inside the container by running as the root user.
This assumes you haven't enabled any other security with SELinux or AppArmor.
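Before changing anything, a couple of read-only checks can pinpoint the cause (a sketch, using the same container name):
docker exec container_1 id        # which uid/gid the exec user has
docker exec container_1 ls -ld /  # permissions on the default working directory
docker exec -u root container_1 touch /tmp/test.txt   # retry as root in a world-writable directory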

Permission denied inside Docker container

I have a running container, gigantic_booth, and I want to create the directory /etc/test:
# docker exec -it gigantic_booth /bin/bash
$ mkdir /etc/test
mkdir: cannot create directory '/etc/test': Permission denied
The sudo command is not found. I don't want to create this directory at image build time, but after the container is started.
How can I do this?
Thanks :)
I'm using the jenkins image, and I have just read that it has root access disabled for security reasons: https://github.com/jenkinsci/docker#installing-more-tools
I have re-built the image with this Dockerfile:
FROM jenkins
USER root
and now it works properly, though it is not as secure.
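A rebuild-and-run sketch (the image tag is hypothetical, and port mappings and volumes are omitted):
docker build -t jenkins-as-root .
docker run -d --name gigantic_booth jenkins-as-root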
Or skip the rebuild and get a root shell only when you need one: docker exec -u root -it gigantic_booth /bin/bash.
