I am trying to docker cp a directory and everything inside to a container but I am getting this error:
$ docker cp clink 2eca:.
Error response from daemon: Error processing tar file(exit status 1): open /clink/jobs/target/streams/$global/assemblyOption/$global/streams/assembly/82b354e42adbb42665af515b27b02de840e038ee_2df49b1995a6c8135f35e880f8876f7513ab872d_da39a3ee5e6b4b0d3255bfef95601890afd80709/org/scalacheck/ArbitraryArities$$anonfun$arbTuple11$1$$anonfun$apply$99$$anonfun$apply$100$$anonfun$apply$101$$anonfun$apply$102$$anonfun$apply$103$$anonfun$apply$104$$anonfun$apply$105$$anonfun$apply$106$$anonfun$apply$107$$anonfun$apply$108$$anonfun$apply$109.class: file name too long
I am pretty new to Docker, so I'm not sure why this is happening or how to fix it.
Posted an issue on Docker's GitHub:
https://github.com/docker/docker/issues/31353
It is indeed a bug. It doesn't happen on 1.12.2.
Thanks for opening the issue:
https://github.com/docker/docker/issues/31353
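Until you can upgrade, one possible workaround (a sketch, assuming tar is available inside the container) is to stream the directory in yourself rather than relying on docker cp:
tar -cf - clink | docker exec -i 2eca tar -xf - -C /   # pack on the host, unpack inside the container, bypassing docker cp's tar handling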
I am trying to create an Anonymous Volume with docker but it throws this error:
docker: Error response from daemon: OCI runtime create failed: invalid mount {Destination:C:/Program Files/Git/app/node_modules Type:bind Source:/var/lib/docker/volumes/51c96f13f0232b1d052a91fdb0d8ed60881420ee214aa46ae85e16dfa4bbece0/_data Options:[rbind]}: mount destination C:/Program Files/Git/app/node_modules not absolute: unknown.
I came across this issue today while running Hyperledger Fabric on Windows; it seems to be a mount path issue. The error went away when I ran the command below in Git Bash:
export MSYS_NO_PATHCONV=1
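If you only want this for a single command rather than the whole shell session, you can also prefix it (my-image here is just a placeholder):
MSYS_NO_PATHCONV=1 docker run -v "$(pwd)":/app my-image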
First, open a command prompt (PowerShell or cmd) in your working directory; if feedback is your working directory, open the prompt there. Then, on Windows, use:
docker run -p (localhost port):(container port) -v %cd%:/app
That is, use %cd% instead of $(pwd); that worked for me. I suspect the issue is with Git Bash; you could use PowerShell instead. I came across the exact same problem just now, and Git Bash gave the exact same error.
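For example, a concrete version of the above in cmd (my-image is a placeholder image name; in PowerShell, ${PWD} plays the role of %cd%):
docker run -p 8080:3000 -v "%cd%":/app my-image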
I found a permanent fix for Windows. Docker volumes have a known issue on Windows (https://forums.docker.com/t/volume-mounts-in-windows-does-not-work/10693/7), so we can use:
docker run -p 8080:3000 -v /app/node_modules -v //d/Desktop/Docker/react-app/front-end:/app
If your files are on the C drive, use //c/Desktop/Docker/react-app/front-end:/app instead of //d/Desktop/Docker/react-app/front-end:/app.
And remember to use PowerShell, not Git Bash, since Git Bash has the path issue mentioned above.
If you're using a React application, make sure you have a .env file with CHOKIDAR_USEPOLLING=true so that file watching works inside the container.
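For example, a minimal .env at the project root (assuming Create React App, which reads this file automatically):
CHOKIDAR_USEPOLLING=true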
Also, be sure you are running the docker command in the same folder where the docker-compose.yml lives. Otherwise, the command can run fine until it reaches the point where it loads the volume, at which point the relative path resolves incorrectly and throws the above error.
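For instance (the path is a placeholder):
cd /path/to/project    # the folder that contains docker-compose.yml
docker-compose up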
I have a container that exits because of a core dump, but I want to retrieve a log from the container path /opt/big/err.log.
If I start the container, it core dumps and exits quickly.
I tried to mount the directory using docker run -v $(pwd)/log:/opt/big, but with that mount I get the error:
Docker Error response from daemon: OCI runtime create failed
Is there any way to get this log file out?
Try copying the file out using:
docker cp CONTAINER_NAME:/opt/big $(pwd)/log
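Note that docker cp works even on a stopped container, so you can grab the file after the crash; <container_id> below stands for whatever docker ps -a shows for the exited container:
docker ps -a                                          # the exited container still appears here
docker cp <container_id>:/opt/big/err.log ./err.log   # copy just the log file out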
I am using a Windows 10 machine with Docker for Windows, and I pulled the cloudera-quickstart:latest image. While trying to run it, I get the error below.
Can someone please suggest a fix?
docker: Error response from daemon: oci runtime error: container_linux.go:262: starting container process caused "exec: \"/usr/bin/docker-quickstart\": stat /usr/bin/docker-quickstart: no such file or directory"
my run command:
docker run --hostname=quickstart.cloudera --privileged=true -t -i cloudera/quickstart /usr/bin/docker-quickstart
The issue was that I had downloaded the Docker image tarball separately and created the image with the commands below, which is not supported in Cloudera 5.10 and above:
tar xzf cloudera-quickstart-vm-*-docker.tar.gz
docker import - cloudera/quickstart:latest < cloudera-quickstart-vm-*-docker/*.tar
So I finally removed the Docker image and then pulled it properly:
docker pull cloudera/quickstart:latest
Now the container is properly up and running.
If you downloaded the CDH v5.13 Docker image, then the issue is most likely due to the structure of the image archive; in my case, I found it to be cloudera*.tar.gz > cloudera*.tar > cloudera*.tar! It seems the packaging was done incorrectly, and the official documentation doesn't capture this either :( In that case, just perform one more level of extraction to get to the correct cloudera*.tar archive. This post from the Cloudera forum helped me.
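In that case, the extra step looks roughly like this (file names are illustrative and will differ):
tar xzf cloudera-quickstart-vm-*-docker.tar.gz    # outer gzip layer yields another .tar
tar xf cloudera-quickstart-vm-*-docker.tar        # extract once more to reach the real image tar
docker import - cloudera/quickstart:latest < cloudera-quickstart-vm-*-docker/*.tar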
I'm running a Ubuntu 14.04 VC2 & Ubuntu 14.04 Storage Instance from Vultr.com.
I've setup the private network so they see each other.
I mounted the storage instance with sshfs root@x.x.x.x: /storage.
I've installed the Docker version from docker.com and removed the distro version with apt-get remove --purge docker.io. That solved the FATA errors I was getting.
I copied /var/lib/docker/* to /storage and created a symlink with ln -s /storage /var/lib/docker. Yes, I removed /var/lib/docker before creating the symlink.
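For reference, the move looked roughly like this (with the Docker daemon stopped first):
service docker stop
cp -a /var/lib/docker/* /storage/    # -a preserves ownership and permissions
rm -rf /var/lib/docker
ln -s /storage /var/lib/docker
service docker start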
I did a 'docker run hello-world' and it worked. However, when I do a 'docker pull ubuntu:14.04' OR 'docker pull myrepo/myimage:1.0' it will download and extract but at the very end I get this error:
failed to register layer: Error processing tar file(exit status 1): lchown /bin/bzcmp: no such file or directory
I've pulled the image to my localhost, also with a symlink, and it works. Also, when I was running purely off a storage instance (extremely slow - don't do it) the docker image did work. So, it's not the image itself.
I've tried searching for this error and can't find anything about it. Any idea what's going on?
I have created a Docker image, containing a Python script, based on a CentOS image. The image works on the host system. Then I converted that image to tar.gz format. When I imported that tar.gz file into another Docker host (an Ubuntu system), the import completed properly and docker images shows the image listed there. Then I tried to run the container in interactive mode using the following command:
$ docker run -it image_name /bin/bash
It throws the following error:
docker: Error response from daemon: invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"exec: \\\"/bin/bash\\\": stat /bin/bash: no such file or directory\"\n".
The docker run -it image_name /bin/bash command works for all other images on my system. I have tried almost everything I could think of, but got nothing apart from this error.
docker run -it image_name /bin/sh works for me! (Some images, like Alpine-based ones, do not include /bin/bash.)
I've just run into the same issue after updating Docker for Windows. It seems the update corrupted some image layers.
I cleared all the cached containers and images by running:
docker ps -qa | xargs docker rm -f    # force-remove all containers
docker images -q | xargs docker rmi   # remove all images
The last command returned a few errors (some returned images didn't exist anymore).
Then I restarted the service and everything was running again.
I had the same issue, and it was resolved after following the steps described in this post:
https://www.jamescoyle.net/how-to/1512-export-and-import-a-docker-image-between-nodes
Instead of saving the Docker image (I) as a .tar and importing it, we need to commit the exited container (based on image I) as a new image (N).
Then save the newly committed image (N) as a .tar file for importing into the new environment.
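A sketch of that flow, with placeholder names:
docker commit <exited_container_id> myimage:recovered   # commit the stopped container as a new image
docker save -o myimage.tar myimage:recovered            # docker save keeps image metadata, unlike export
docker load -i myimage.tar                              # run this in the new environment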
Hope this helps...