Access is denied while docker save - docker

I am trying to save a Docker image on Windows so that I can load it on another Linux box. While saving the image on Windows, I got an error stating that access is denied to rename the Docker temp file.
I checked the permissions and everything looks fine; in fact, I am able to edit files there. Any help here is highly appreciated. I am using Docker 1.11.0.
docker save -o . <imgID>
rename .docker_temp_742575903 .: Access is denied.

Never mind, along with the path I also need to give the name of the new file that Docker wants to create; it doesn't happen implicitly. In my case I used:
docker save -o ./<tar name that you wanted docker to create> <imgID>
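For example, with a hypothetical image ID and archive name:
docker save -o ./myimage.tar 44776f55294a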

For a similar issue, but on Unix:
root@linux:/opt/docker# docker save -o ./presto.tar starburstdata/presto
open .docker_temp_359214587: permission denied
You can use a different syntax to save the image as a workaround:
root@linux:/opt/docker# docker save starburstdata/presto > presto.tar
root@linux:/opt/docker# ls -l
total 1356196
-rw-r--r-- 1 root root 1388737024 May 23 11:16 presto.tar
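The resulting archive can then be loaded on the target machine as usual; docker load accepts either a file (-i) or stdin, so either of these works:
docker load -i presto.tar
docker load < presto.tar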

I worked around the problem on Linux by changing the current folder's permissions to 777. Make sure your current directory has write permissions:
mkdir ~/docker-images
cd ~/docker-images
chmod 777 ./
sudo docker save <img_id> -o ./<filename>

You are not giving a file name, so Docker creates a temporary name by default, but then throws an error while renaming it. You can use this command to solve the problem:
docker save -o <some custom name with path> <imgID or REPOSITORY:TAG>
Something like this, if you want to create it in the current directory:
docker save -o ./ubuntu_image.tar ubuntu:latest
or
docker save -o ./ubuntu_image.tar eat546t
If you want to create it in a specific location:
docker save -o path/of/image/ubuntu_image.tar ubuntu:latest
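To sanity-check the resulting archive before copying it to the Linux box, you can simply list its contents (file name taken from the example above):
tar -tf ./ubuntu_image.tar | head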

Related

docker cp returning 'invalid output path' error

Use case: copy a file containing some creds from a local machine directory to an existing, already-created Docker container/volume.
Per the documentation on using docker cp, I constructed my command line statement like this:
docker cp mynodered:/Users/<myUserName>/Documents/nodered-volume/creds.json /data/creds.json
However, I consistently get an error returned:
invalid output path: directory "/data" does not exist
Eventually, I found that changing the syntax of the docker cp statement to:
docker cp /Users/<myUserName>/Documents/nodered-volume/creds.json mynodered:/data/creds.json
resolved the issue.
troubleshooting tl;dr
I didn't see this documented anywhere, but the syntax that worked for me was docker cp <current local filepath> containerName:/<intended container filepath>
Make sure there is not a space between containerName: and /<intended container filepath>
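A minimal illustration of both directions (container name and paths are hypothetical):
# host -> container
docker cp ./creds.json mynodered:/data/creds.json
# container -> host
docker cp mynodered:/data/creds.json ./creds-copy.json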
However, I consistently get an error returned: invalid output path: directory "/data" does not exist
You are getting the above error message because that directory does not exist:
# ensure /data exists if not create directory
mkdir -pv /data
# now copy whatever from container to host directory
docker cp <container-id-or-name>:/absolute/path/of/your/file /data

Why is docker not completely deleting my file?

I am trying to build using:
FROM mcr.microsoft.com/dotnet/core/sdk:2.1 AS builder
COPY pythonnet/src/ pythonnet/src
WORKDIR /pythonnet/src/runtime
RUN dotnet build -f netstandard2.0 -p:DefineConstants=\"MONO_LINUX\;XPLAT\;PYTHON3\;PYTHON37\;UCS4\;NETSTANDARD\" Python.Runtime.15.csproj
# copy myApp csproj and restore
COPY src/myApp/*.csproj /src/myApp/
WORKDIR /src/myApp
RUN dotnet restore
# now copy everything else as separate docker step
# (copy to staging folder, remove csproj, and copy down - so we don't overwrite project above)
WORKDIR /
COPY src/myApp/ ./staging/src/myApp
RUN rm ./staging/src/myApp/*.csproj \
&& cp -r ./staging/* ./ \
&& rm -rf ./staging
This was working fine, and in Windows 10 still does, but in CentOS 7 I get:
Step 10/40 : RUN rm ./staging/src/myApp/*.csproj && cp -r ./staging/* ./ && rm -rf ./staging
---> Running in 6b17ae0fae89
cp: cannot stat './staging/src/myApp/myApp.csproj': No such file or directory
Using ls instead of cp throws a similar file not found error, so it looks like Docker still knows about myApp.csproj but cannot see it since it has been removed.
Is there a way around this? I have tried using rsync, but ran into similar problems.
I simply ignored the issue by tacking ;exit 0 onto the offending lines. Not great, but it does the job.
EDIT: This worked for me because I cannot upgrade the version of CentOS. If you can, check out Alexander Block's answer.
I don't know specifically how to solve this problem as there's a lot of context in the filesystem that you haven't (and probably can't) share with us.
My suggestion on a strategy is that you:
1. comment out all lines from the failing one 'til the end of the Dockerfile
2. build the partial image
3. docker run -it [image] bash to jump into the partial image (docker exec only attaches to an already running container)
4. poke around and figure out what's going wrong
repeat 1-4 until things work as expected
It's not as fun as a perfectly insightful answer of course but this is a relentlessly effective algorithm even if it's tedious and annoying.
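A minimal version of that loop might look like this (the tag name is arbitrary):
# build only the part of the Dockerfile that still succeeds
docker build -t myapp-debug .
# start a throwaway container from the partial image and poke around
docker run --rm -it myapp-debug bash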
EDIT
My wild guess is that somehow, some way, the Linux machine doesn't have the file where it's expected, so it doesn't get copied into the image at all, and that's why the docker build process can't find it. But there's no way to know without debugging the build process.
cp -r will stop and fail with that cannot stat <file> message whenever the source is a symbolic link and the target of the link does not exist. It will not copy links to non-existent files.
So my guess is that after you run COPY src/myApp/ ./staging/src/myApp, your file ./staging/src/myApp/myApp.csproj is a symbolic link to a non-existent file. Why the following RUN rm ./staging/src/myApp/*.csproj doesn't remove it and stays silent about it, I don't know.
To help demonstrate my theory, see below, showing cp failing on a broken symlink on CentOS 7.
[547] $ docker run --rm -it centos:7
Unable to find image 'centos:7' locally
7: Pulling from library/centos
524b0c1e57f8: Pull complete
Digest: sha256:e9ce0b76f29f942502facd849f3e468232492b259b9d9f076f71b392293f1582
Status: Downloaded newer image for centos:7
[root@a47b77cf2800 /]# ln -s /tmp/foo /tmp/bar
[root@a47b77cf2800 /]# ls -l /tmp/foo
ls: cannot access /tmp/foo: No such file or directory
[root@a47b77cf2800 /]# ls -l /tmp/bar
lrwxrwxrwx 1 root root 8 Jul 6 05:44 /tmp/bar -> /tmp/foo
[root@a47b77cf2800 /]# cp /tmp/foo /tmp/1
cp: cannot stat '/tmp/foo': No such file or directory
[root@a47b77cf2800 /]# cp /tmp/bar /tmp/2
cp: cannot stat '/tmp/bar': No such file or directory
Notice how cp reports that it cannot stat either the target of the symbolic link or the link itself. It's the exact symptom you are seeing.
If you just want to get past this, you can try tar instead of cp or rsync.
Instead of
cp -r ./staging/* ./
use this instead:
tar -C ./staging -cf - . | tar -xf -
tar will happily copy symlinks that don't exist.
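Applied to the Dockerfile above, the offending step might become something like this (a sketch, not verified on the affected kernel):
RUN rm ./staging/src/myApp/*.csproj \
 && tar -C ./staging -cf - . | tar -xf - \
 && rm -rf ./staging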
You've very likely encountered a kernel bug that was fixed a long time ago in more recent kernels. According to https://de.wikipedia.org/wiki/CentOS, CentOS 7 is based on Linux kernel 3.10, which is already quite old and does not have good Docker support with regard to the storage backend (overlay filesystem).
CentOS tried to backport the needed fixes and features into 3.10, but seems not to have fully succeeded when it comes to overlay support. There are multiple (slightly different) issues regarding this which you can find when searching for "CentOS 7 overlay driver" on the internet. All of them have in common that removing files from parent overlays does not work as expected.
For me it looks like rm calls on files return success, even though the files are not fully removed. Directory listings (e.g. by ls or shell expansion as in your case) then still list the file, while accessing the file then fails (no matter if read, write or deletion of the file).
I assume that what you've seen is just another incarnation of these issues. You should either switch to CentOS 8 or upgrade your Kernel (which is not officially supported by CentOS as far as I understand). Or even more radical, switch to a distribution which is used more often in combination with Docker and generally offers more recent Kernels, e.g. Debian or Ubuntu.
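To check whether you are on such a kernel/storage-driver combination, something like this should tell you (output varies by host):
uname -r                                 # stock CentOS 7 reports a 3.10.x kernel
docker info | grep -i 'storage driver'   # e.g. Storage Driver: overlay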

Can't load docker from saved tar image

I committed my container as an image, then used "docker save" to save the image as a tar. Now I'm trying to load the tar on a GCC Centos 7 instance. I packaged it locally on my Ubuntu machine.
I've tried: docker load < image.tar and sudo docker load < image.tar
I also tried chmod 777 image.tar to see if the issue was permissions related.
Each time I try to load the image I get a variation of this error (the xxxx bit is a different number every time):
open /var/lib/docker/tmp/docker-import-xxxxxxxxx/repositories: no such file or directory
I think it might have something to do with permissions, because when I try to cd into /var/lib/docker/ I run into permissions issues.
Are there any other things I can try? Or is it likely to be a corrupted image?
There was a simple answer to this problem
I ran md5 checksums on the images before and after I moved them across systems and they were different. I re-transferred and all is working now.
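Comparing checksums on both machines is a quick way to rule out transfer corruption (file name is an example):
md5sum image.tar   # run on the source machine
md5sum image.tar   # run again on the target after copying; the hashes should match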
For me the problem was that I passed a .tgz as input. Once I extracted the tarball, there was the .tar file inside. Using that file as input, it was successful.
The sequence of the BUILD(!) command is important, try this sequence:
### 1/4: Build it:
# docker build -f MYDOCKERFILE -t MYCNTNR .
### 2/4: Save it:
# docker save -o ./mycontainer.tar MYCNTNR
### 3/4: Copy it to the target machine:
# rsync/scp/... mycontainer.tar someone@target:.
### 4/4: On the target, load it:
# docker load -i mycontainer.tar
<snip>
Loaded image: MYCNTNR
I had the same issue and the following command fixed it for me:
cat <file_name>.tar | docker import - <image_name>:<image_version/tag>
Ref: https://webkul.com/blog/error-open-var-lib-docker-tmp-docker-import/

How to get files generated by docker run to host

I have run docker run to generate a file
sudo docker run -i --mount type=bind,src=/home/mathed/Simulation/custom_desman/1/Strains/Simulation2/Assembly,target=/home/mathed/Simulation/custom_desman/1/Strains/Simulation2/Assembly 990210oliver/mycc.docker:v1 MyCC.py /home/mathed/Simulation/custom_desman/1/Strains/Simulation2/Assembly/final_contigs_c10K.fa
This is the message I've got after executing.
20181029_0753
4mer
1_rename.py /home/mathed/Simulation/custom_desman/1/Strains/Simulation2/Assembly/final_contigs_c10K.fa 1000
Seqs >= 1000 : 32551
Minimum contig lengh for first stage clustering: 1236
run Prodigal.
/opt/prodigal.linux -i My.fa -a gene.aa -d gene.nuc -f gbk -o output -s potential_genes.txt
run fetchMG.
run UCLUST.
Get Feature.
2_GetFeatures_4mer.py for fisrt stage clustering
2_GetFeatures_4mer.py for second stage clustering
3_GetMatrix.py 1236 for fisrt stage clustering
22896 contigs entering first stage clustering
Clustering...
1_bhsne.py 20
2_ap.py /opt/ap 500 0
Cluster Correction.
to Split and Merge.
1_ClusterCorrection_Split.py 40 2
2_ClusterCorrection_Merge.py 40
Get contig by cluster.
20181029_0811
I now want to get the files generated by MyCC.py onto the host.
After reading Copying files from Docker container to host, I tried,
sudo docker cp 642ef90103be:/opt /home/mathed/data
But I got an error message
mkdir /home/mathed/data/opt: permission denied
Is there a way to get the files generated to a directory /home/mathed/data?
Thank you.
I assume your destination path does not exist.
The docker cp documentation states that in that case:
SRC_PATH specifies a directory
DEST_PATH does not exist
DEST_PATH is created as a directory and the contents of the source directory are copied into this directory
Thus it is trying to create a directory for DEST_PATH... and docker must have the rights to do so.
Depending on who owns the closest existing parent directory of DEST_PATH, you may have to either:
create the directory first, so that it will not be created by docker, and give it the correct rights (it looks like docker has no rights to do so) using sudo chown {user}:{group} {folder} plus chmod +x {folder} (see the sketch below),
change the rights on the existing parent directory (chown + chmod again), or
switch to a path where docker is allowed to write.
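For the command from the question, the first option could look like this (ownership details may need adjusting for your setup):
# create the destination first and make sure the current user owns it
mkdir -p /home/mathed/data
sudo chown $USER: /home/mathed/data
# then copy out of the container
sudo docker cp 642ef90103be:/opt /home/mathed/data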

Docker non-root access: Error loading config file:stat /home/wu/.docker/config.json

I finished the Docker installation with non-root access, namely:
1. define a docker user group
2. add my current user to the docker group
3. pass the test:
docker run --rm hello-world
but when I start to provision my docker containers, somewhere in the procedure I got the error:
Error loading config file:stat /home/user/.docker/config.json:Permission Denied
It seems that Docker is trying to access some resources but is denied access.
What is happening here? How can I fix this?
Thanks.
It may be that the .docker folder and the .docker/config.json file are not owned by the current user.
Try this:
sudo chown $USER:docker ~/.docker
sudo chown $USER:docker ~/.docker/config.json
sudo chmod g+rw ~/.docker/config.json
I got the same error after upgrading from 1.4 to the latest version.
Fix: manually create .docker/config.json (put an empty JSON object, {}, in it) and set the ownership like this:
-rw-r--r-- 1 myuser docker /home/myuser/.docker/config.json
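A minimal sketch of that fix (assuming the docker group already exists):
mkdir -p ~/.docker
echo '{}' > ~/.docker/config.json
sudo chown "$USER":docker ~/.docker/config.json
chmod 644 ~/.docker/config.json   # matches the -rw-r--r-- ownership shown above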
Simply grant your current user access to the ~/.docker folder and the warning will vanish into thin air.
Run this on your terminal:
sudo chown $USER:docker ~/.docker
