I use Docker Desktop to run a container with a volume (/data) mapped to a directory on my macOS host.
Then I created a symlink.
In the container centos-test's /data, I can see it:
sh-4.4# ls -l
total 0
lrwxr-xr-x 1 root root 72 May 25 03:30 ssl-proj-control -> /Users/jack/ScotchBox/my-project/public/ssl-proj/ssl-proj-control
You can see it links to my macOS directory /Users/jack/ScotchBox/my-project/public/ssl-proj/ssl-proj-control.
But I cannot access it; cd fails with this error:
sh-4.4# cd ssl-proj-control
sh: cd: ssl-proj-control: No such file or directory
To prove the target exists, in my macOS Terminal I can ls it:
$ ls /Users/jack/ScotchBox/my-project/public/ssl-proj/ssl-proj-control
ssl-proj-control-backend ssl-proj-control-frontend
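Note: the symlink stores only the macOS path, and path resolution happens inside the container's own filesystem, where /Users/jack/... does not exist, hence "No such file or directory". A minimal sketch of one workaround (the image name and shell are assumptions, not from the question): bind-mount the link target at the identical absolute path inside the container so the link can resolve there:
docker run -it \
  -v /Users/jack/ScotchBox/my-project/public/ssl-proj/ssl-proj-control:/Users/jack/ScotchBox/my-project/public/ssl-proj/ssl-proj-control \
  centos sh    # keep your existing /data mount as well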
AFAIK the error means that there is no file named agent.28198 in the mentioned directory, but upon listing its contents the file (a local socket) is clearly there. What could be the reason for Docker's inability to access the socket?
Here is the full command scenario:
$ eval $(ssh-agent -s)
Agent pid 28199
$ ssh-add
Enter passphrase for /home/ubuntu/.ssh/id_rsa:
Identity added: /home/ubuntu/.ssh/id_rsa (/home/ubuntu/.ssh/id_rsa)
$ DOCKER_BUILDKIT=1 docker build --ssh default -t my_image .
could not parse ssh: [default]: stat /tmp/ssh-qpL02JZP5k7x/agent.28198: no such file or directory
$ ls -l /tmp/ssh-qpL02JZP5k7x/
total 0
srw------- 1 ubuntu ubuntu 0 Sep 9 08:50 agent.28198
Docker installed from snap is the culprit: because of snap confinement it does not have access to the /tmp folder. To remediate, remove the snap package with sudo snap remove docker and install Docker via dpkg (link).
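For reference, a minimal sketch of that remediation on Ubuntu (docker.io is the distribution package; adjust to your setup):
# remove the snap-confined Docker, which cannot read /tmp
sudo snap remove docker
# install the distribution package instead
sudo apt-get update && sudo apt-get install -y docker.io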
I have an issue when running a docker container.
➜ bc_to_influx git:(master) ✗ docker run registry.gitlab.com/xxx/bc_to_influx:latest
standard_init_linux.go:207: exec user process caused "no such file or directory"
To debug, I commit the stopped container and enter it:
docker commit 0db73216baaf user/test_image
docker run -ti --entrypoint=sh user/test_image
Running ls, I can see only my executable:
/bc2influx # ls -al
total 20552
drwxr-xr-x 1 root root 4096 Jun 6 10:32 .
drwxr-xr-x 1 root root 4096 Jun 6 11:53 ..
-rwxr-xr-x 1 root root 21034520 Jun 6 10:29 bc2influx
/bc2influx #
but when I try to execute, I get:
/bc2influx # ./bc2influx
sh: ./bc2influx: not found
I can vi and cat the executable, but not execute it.
Here is my Dockerfile:
FROM alpine
WORKDIR /bc2influx/
COPY ./release/bc2influx /bc2influx/
RUN ls -al /bc2influx/
CMD [ "./bc2influx" ]
I previously built my executable with:
go build -o ./release/bc2influx -v -ldflags '-extldflags "-static"' ./...
Any idea what's going on?
Looks like a musl libc issue; try this build command: go build -ldflags="-s -w".
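If that alone doesn't help: the usual root cause is that the Go binary was dynamically linked against glibc, whose loader doesn't exist on musl-based Alpine, which produces the misleading "not found" error. A sketch of a commonly used fix (not from the answer above) is to disable cgo so the binary is fully static:
CGO_ENABLED=0 GOOS=linux go build -o ./release/bc2influx -v ./...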
How can I control which user owns the files I copy in and out of a container?
The docker cp command says this about file ownership:
The cp command behaves like the Unix cp -a command in that directories are copied recursively with permissions preserved if possible. Ownership is set to the user and primary group at the destination. For example, files copied to a container are created with UID:GID of the root user. Files copied to the local machine are created with the UID:GID of the user which invoked the docker cp command. However, if you specify the -a option, docker cp sets the ownership to the user and primary group at the source.
It says that files copied to a container are created as the root user, but that's not what I see. I copy files owned by user IDs 1005 and 1006, and those owners are translated into the container's user namespace. The -a option seems to make no difference when I copy the file into a container.
$ sudo chown 1005:1005 test.txt
$ ls -l test.txt
-rw-r--r-- 1 1005 1005 29 Oct 6 12:43 test.txt
$ docker volume create sandbox1
sandbox1
$ docker run --name run1 -vsandbox1:/data alpine echo OK
OK
$ docker cp test.txt run1:/data/test1005.txt
$ docker cp -a test.txt run1:/data/test1005a.txt
$ sudo chown 1006:1006 test.txt
$ docker cp test.txt run1:/data/test1006.txt
$ docker cp -a test.txt run1:/data/test1006a.txt
$ docker run --rm -vsandbox1:/data alpine ls -l /data
total 16
-rw-r--r-- 1 1005 1005 29 Oct 6 19:43 test1005.txt
-rw-r--r-- 1 1005 1005 29 Oct 6 19:43 test1005a.txt
-rw-r--r-- 1 1006 1006 29 Oct 6 19:43 test1006.txt
-rw-r--r-- 1 1006 1006 29 Oct 6 19:43 test1006a.txt
When I copy files out of the container, they are always owned by me. Again, the -a option seems to do nothing.
$ docker run --rm -vsandbox1:/data alpine cp /data/test1006.txt /data/test1007.txt
$ docker run --rm -vsandbox1:/data alpine chown 1007:1007 /data/test1007.txt
$ docker cp run1:/data/test1006.txt .
$ docker cp run1:/data/test1007.txt .
$ docker cp -a run1:/data/test1006.txt test1006a.txt
$ docker cp -a run1:/data/test1007.txt test1007a.txt
$ ls -l test*.txt
-rw-r--r-- 1 don don 29 Oct 6 12:43 test1006a.txt
-rw-r--r-- 1 don don 29 Oct 6 12:43 test1006.txt
-rw-r--r-- 1 don don 29 Oct 6 12:47 test1007a.txt
-rw-r--r-- 1 don don 29 Oct 6 12:47 test1007.txt
-rw-r--r-- 1 1006 1006 29 Oct 6 12:43 test.txt
$
You can also change the ownership by logging into the container as the root user:
docker exec -it --user root <container-id> /bin/bash
chown -R <username>:<groupname> <folder/file>
In addition to @Don Kirkby's answer, let me provide a similar example in bash/shell script for the case where you want to copy something into a container while applying different ownership and permissions than those of the original file.
Let's create a new container from a small image that will keep running by itself:
docker run -d --name nginx nginx:alpine
Now we'll create a new file which is owned by the current user and has default permissions:
touch foo.bar
ls -ahl foo.bar
>> -rw-rw-r-- 1 my-user my-group 0 Sep 21 16:45 foo.bar
Copying this file into the container will set the owner and group to my user's UID and preserve the permissions:
docker cp foo.bar nginx:/foo.bar
docker exec nginx sh -c 'ls -ahl /foo.bar'
>> -rw-rw-r-- 1 4098 4098 0 Sep 21 14:45 /foo.bar
Using a little tar work-around, however, I can change the ownership and permissions that are applied inside of the container.
tar -cf - foo.bar --mode u=+r,g=-rwx,o=-rwx --owner root --group root | docker cp - nginx:/
docker exec nginx sh -c 'ls -ahl /foo.bar'
>> -r-------- 1 root root 0 Sep 21 14:45 /foo.bar
tar options explained:
c creates a new archive instead of unpacking one.
f - will write to stdout instead of a file.
foo.bar is the input file to be packed.
--mode specifies the permissions for the target. Similar to chmod, they can be given in symbolic notation or as an octal number.
--owner sets the new owner of the file.
--group sets the new group of the file.
docker cp - reads the file that is to be copied into the container from stdin.
This approach is useful when a file needs to be copied into a created container before it starts, where docker exec is not an option (it can only operate on running containers).
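For example, a sketch reusing the names from above (the container name nginx2 is an assumption): create the container without starting it, copy the file in through the tar stream, then start it:
# create the container without starting it
docker create --name nginx2 nginx:alpine
# copy the file in with adjusted ownership before the entrypoint runs
tar -cf - foo.bar --mode u=+r,g=-rwx,o=-rwx --owner root --group root | docker cp - nginx2:/
docker start nginx2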
Just a one-liner (similar to @ramu's answer), using root to make the call:
docker exec -u 0 -it <container-id> chown node:node /home/node/myfile
In order to get complete control of file ownership, I used the tar stream feature of docker cp:
If - is specified for either the SRC_PATH or DEST_PATH, you can also stream a tar archive from STDIN or to STDOUT.
I launch the docker cp process, then stream a tar file to or from the process. As the tar entries go past, I can adjust the ownership and permissions however I like.
Here's a simple example in Python that copies all the files from /outputs in the sandbox1 container to the current directory, excludes the current directory so its permissions don't get changed, and forces all the files to have read/write permissions for the user.
from subprocess import Popen, PIPE, CalledProcessError
import tarfile


def main():
    # Stream the container directory out of docker cp as a tar archive.
    export_args = ['sudo', 'docker', 'cp', 'sandbox1:/outputs/.', '-']
    exporter = Popen(export_args, stdout=PIPE)
    tar_file = tarfile.open(fileobj=exporter.stdout, mode='r|')
    tar_file.extractall('.', members=exclude_root(tar_file))
    exporter.wait()
    if exporter.returncode:
        raise CalledProcessError(exporter.returncode, export_args)


def exclude_root(tarinfos):
    # Skip the root directory entry so its permissions are untouched,
    # and force user read/write on every other entry.
    print('\nOutputs:')
    for tarinfo in tarinfos:
        if tarinfo.name != '.':
            assert tarinfo.name.startswith('./'), tarinfo.name
            print(tarinfo.name[2:])
            tarinfo.mode |= 0o600
            yield tarinfo


main()
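For simpler cases, the same streaming technique works directly from the shell (a sketch assuming GNU tar; ownership stored in the archive is ignored by default when extracting as a non-root user):
docker cp sandbox1:/outputs/. - | tar -x --no-same-owner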
In my container, I have a folder that contains a relative symlink to a parent's parent subfolder:
$ docker run --name symlink-test ubuntu bash -c "mkdir -p /1/2; touch /1/2/a; ln -s ../../usr /1/2; touch /1/2/z; ls -l /1/2"
total 4
-rw-r--r--. 1 root root 0 Mar 4 03:37 a
lrwxrwxrwx. 1 root root 9 Mar 4 03:37 usr -> ../../usr
-rw-r--r--. 1 root root 0 Mar 4 03:37 z
I want to copy the folder /1 to the host. However, I always get the following error:
$ docker cp symlink-test:/1/2 /tmp
invalid symlink "/tmp/2/usr" -> "../../usr"
$ ls /tmp/2
a
Copying the files fails and docker cp aborts after it sees the symlink.
There are some Docker bugs related to this, but they are either fixed or were caused by something different:
FATA[0000] invalid symlink when copying a symlink with relative parent paths
Error attempting to cp directory containing symlink
I'm running Docker 1.10.2 on Fedora 23.
Is the above behavior of docker cp intended or is it a bug? If it is intended, what's the reasoning behind it?
In my case, I got it to work with:
docker cp -L container:/path/to/file.png current/directory/file.png
I don't know whether this behavior is intentional, but here's a workaround:
docker cp my-container:/path/to/dir - | tar -x
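Applied to the example from the question, that would look something like the following; tar recreates the symlink itself instead of trying to resolve it:
docker cp symlink-test:/1/2 - | tar -x -C /tmp
ls -l /tmp/2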
You could docker exec -it <container> sh, find the target of the symlink you want to copy with readlink -f symlinkName, and then docker cp that path instead.
The "docker cp -L " will fail on relative symlinks, you should fix them to be full path instead.
e.g.
invalid symlink "/tmp/2/usr" -> "../../usr"
usr is a relative symlink to ../../usr.
instead change it to: usr -> /full/path/to/usr/
another example:
invalid symlink "/working_folder/XXX/lib" -> "../../lib"
because: lib -> ../../lib/
fix it: lib -> /full/path/to/lib/
and then it will work:
docker cp -L <container>:/working_folder/ /full/path/to/working_folder/
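Applied to the example from the question, a sketch (the first command runs inside the container, the second on the host):
# inside the container: replace the relative link with an absolute target
ln -sfn /usr /1/2/usr
# on the host: the copy now succeeds
docker cp -L symlink-test:/1/2 /tmp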
I am using a very simple .travis.yml to compile a C++ program via Docker in Travis-CI. (My motivation is to experiment with running Docker in Travis CI.)
sudo: required
services:
- docker
before_install:
- docker pull glot/clang
script:
- sudo docker run --rm -v "$(pwd)":/app -w /app glot/clang g++ main.cpp
But the build is failing with the following error:
/usr/bin/ld: cannot open output file a.out: Permission denied
This happens regardless of whether I use sudo or not. Can someone help me figure out the root cause and fix this? Thanks.
I would suggest setting the mount path explicitly rather than deriving it with $(pwd). Then you need to check the permissions from inside the container. Try something like this:
sudo docker run --rm -v "$(pwd)":/app -w /app glot/clang stat /app
This will show the folder's permissions. Probably no one is able to write to this folder.
Also, you should avoid building your software with root permissions; it's not secure. Create a non-privileged user and use it when running the compiler, as shown below.
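One common way to do that, as a sketch reusing the image from the question: pass the invoking user's UID and GID so the compiler runs unprivileged and the output file in the bind mount belongs to that user:
# run the compiler as the invoking user so a.out is writable and correctly owned
sudo docker run --rm --user "$(id -u):$(id -g)" -v "$(pwd)":/app -w /app glot/clang g++ main.cpp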
UPD:
I cannot reproduce this issue with Docker 1.6.0; it's probably caused by some filesystem settings persisted by the Travis-CI virtual machine. This is what I have on my localhost:
➜ /tmp mkdir /tmp/code
➜ /tmp echo "int main(){}" > /tmp/code/main.cpp
➜ /tmp echo "g++ main.cpp && ls -l" > /tmp/code/build.sh
➜ /tmp docker run --rm -v /tmp/code:/app -w /app glot/clang bash /app/build.sh
total 20
-rwxr-xr-x 1 glot glot 8462 Dec 30 10:19 a.out
-rwxrwxr-x 1 glot glot 22 Dec 30 10:17 build.sh
-rw-rw-r-- 1 glot glot 13 Dec 30 10:10 main.cpp
As you can see, the resulting binary appears in the /app folder.