No container found in path - Docker

I am trying to download a Docker image called anchore-engine, found at the following link: https://hub.docker.com/r/anchore/anchore-engine/
For convenience, I will post a copy of the commands used to create the container and get it running, as they specify. (I would have posted a screenshot of their instructions, but that requires reputation 10, so the link will have to do.)
The issue I am having is specifically with this line of their instructions:
docker cp ae:/docker-compose.yaml ~/aevolume/docker-compose.yaml
I get the following error message from my terminal:
Error: No such container:path: ae:/docker-compose.yaml
My question is how do I fix this?
I am not very familiar with $PATH.
After echoing $PATH with echo "$PATH", I just see a messy blob of paths, and I do not really know how to create the necessary container in the directory specified by the first line, mkdir ~/aevolume.
The thing is, the command specifically requires a container, and when I type ls in that directory I get a blank response.
Please help and thanks.
The lines I have been able to run are:
mkdir ~/aevolume
cd ~/aevolume
docker pull docker.io/anchore/anchore-engine:latest
docker create --name ae docker.io/anchore/anchore-engine:latest
but when I try running
docker cp ae:/docker-compose.yaml ~/aevolume/docker-compose.yaml
as I have done here:
aevolume admin$ docker cp ae:/docker-compose.yaml ~/aevolume/docker-compose.yaml
I get this response:
Error: No such container:path: ae:/docker-compose.yaml

It seems like there's an issue with the latest version of the Anchore image that was updated 14 days ago. I've been experiencing the same issue. I went ahead and used version 0.7.0 for the commands and it worked fine:
mkdir ~/aevolume
cd ~/aevolume
docker pull anchore/anchore-engine:v0.7.0
docker create --name ae anchore/anchore-engine:v0.7.0
docker cp ae:/docker-compose.yaml ~/aevolume/docker-compose.yaml
Hopefully this helps, I was stuck on it for a bit haha.
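If you want to confirm whether the tag you pulled actually contains the file before running docker cp, you can list the container's filesystem without starting it. This is just a diagnostic sketch reusing the ae container created above, not something from the original thread:
# List the container's filesystem and look for the compose file at its root.
docker export ae | tar -tf - | grep -i 'docker-compose'
If nothing is printed, the file simply is not in that image, which matches the behaviour described for the latest tag.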

@Sandeep Kumar There is a correction in the last command.
docker cp ae:/docker-compose.yaml ~/aevolume/docker-compose.yaml

If you are unable to get docker-compose.yaml for anchore, try this:
curl https://docs.anchore.com/current/docs/engine/quickstart/docker-compose.yaml > docker-compose.yaml
Here is the reference:
https://docs.anchore.com/current/docs/engine/quickstart/
It worked well for me after I ran into the same issue described above.

Related

Docker Tutorial Unclear: "Persisting our DB" and "Using Bind Mounts"

I have only started using Docker and was trying to follow the documentation on the official website... Everything was going smoothly until I got to this point.
In step 3:
Upon running the command, I get this error -> ls: cannot access 'C:/Program Files/Git/': No such file or directory.
I thought it was not that big of a deal so I went ahead and skipped to the following parts of the tutorial.
Then I came across the same error in this part:
I tried to locate the directory on my PC manually and found a remote git repository, but the commands still don't work for me. These were the commands that I have tried and their corresponding errors:
docker run -it ubuntu ls / - No such file or directory
cd /path/to/getting-started/app - No such file or directory
docker run -dp 3000:3000 ` -w /app -v "$(pwd):/app" ` node:12-alpine ` sh -c "yarn install && yarn run dev" - docker: Error response from daemon: the working directory 'C:/Program Files/Git/app' is invalid, it needs to be an absolute path.
See 'docker run --help'. (this error was after changing to the directory I manually searched on my PC)
I'm unsure if I have to set a PATH. I don't think I have missed any of the steps in the earlier parts of the tutorial.
Thanks, guys! I was indeed using Git Bash in VS Code. I tried running it in my Windows terminal via Ubuntu (WSL) and now everything's working fine. Thanks Max and Spears, that was exactly what I was having issues with.
These comments helped me resolve the issue:
Maybe this is your problem: github.com/docker-archive/toolbox/issues/673 – Max
Sounds like you are using the Git Bash which comes packaged with Git SCM for Windows. I strongly recommend avoiding this and switching to WSL2. Git Bash is NOT the kind of shell you are looking for when using Docker, due to missing libs and nasty side effects which are mostly very hard to debug. – Spears
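If you do stay on Git Bash instead of switching to WSL2, one workaround worth knowing (my addition, not from the comments above) is to disable the MSYS path conversion that rewrites /app into C:/Program Files/Git/app, either per command or for the whole session:
# Per command: prefix the docker invocation so /app is passed through untouched.
MSYS_NO_PATHCONV=1 docker run -dp 3000:3000 -w /app -v "$(pwd):/app" node:12-alpine sh -c "yarn install && yarn run dev"
# Or for the whole session:
export MSYS_NO_PATHCONV=1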

Run docker load inside RPM file

I'm trying to do an offline deployment of a docker image with RPM on CentOS.
My spec file is pretty simple:
Source1: myimage.tar.gz
...
%install
cp %{SOURCE1} ...
...
%post
docker load -i myimage.tar.gz
docker-compose up -d
docker image prune -af
I compress my image using docker save and gzip. Then, on another machine, I just load the image with docker and use docker-compose to run my service.
When executing the commands "docker load" and "docker-compose up", I get this error:
sudo: unable to execute /bin/docker: Permission denied
sudo: unable to execute /bin/docker-compose: Permission denied
sudo: unable to execute /bin/docker: Permission denied
My user is part of the docker group, and I checked whether the RPM scriptlets are executed as root; they are...
If I install the RPM on my dev machine, it works; if I execute the commands in a script that is not part of the RPM, it works...
Any ideas?
Thanks in advance.
You're probably being blocked by SELinux. You can temporarily disable it to check with setenforce 0.
If that is the problem (it is; this is a comment turned into an answer), some possible solutions:
You might be able to use audit2allow to change the denials into new rules to import.
Maybe udica will help. I don't know enough about it to tell.
I tried the first solution and it worked!
grep rpm_script_t /var/log/audit/audit.log | audit2allow -m mypolicy > mypolicy.te
The problem came from the fact that the RPM scriptlets didn't have access to the container_runtime_exec_t:file entrypoint permission, which, I suppose, allows them to run container runtimes like docker.
Thanks a lot for the tip!
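For completeness, the usual follow-up to the audit2allow step is to build the rules into a loadable module, activate it, and then turn enforcing mode back on. A sketch of the full sequence (the module name mypolicy is arbitrary):
# Turn the logged denials from the RPM scriptlet into a compiled module (mypolicy.pp).
grep rpm_script_t /var/log/audit/audit.log | audit2allow -M mypolicy
# Load the module so the new rules take effect.
semodule -i mypolicy.pp
# Re-enable enforcing mode if it was disabled with setenforce 0 for testing.
setenforce 1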

Why is docker not completely deleting my file?

I am trying to build using:
FROM mcr.microsoft.com/dotnet/core/sdk:2.1 AS builder
COPY pythonnet/src/ pythonnet/src
WORKDIR /pythonnet/src/runtime
RUN dotnet build -f netstandard2.0 -p:DefineConstants=\"MONO_LINUX\;XPLAT\;PYTHON3\;PYTHON37\;UCS4\;NETSTANDARD\" Python.Runtime.15.csproj
# copy myApp csproj and restore
COPY src/myApp/*.csproj /src/myApp/
WORKDIR /src/myApp
RUN dotnet restore
# now copy everything else as separate docker step
# (copy to staging folder, remove csproj, and copy down - so we don't overwrite project above)
WORKDIR /
COPY src/myApp/ ./staging/src/myApp
RUN rm ./staging/src/myApp/*.csproj \
&& cp -r ./staging/* ./ \
&& rm -rf ./staging
This was working fine, and in Windows 10 still does, but in CentOS 7 I get:
Step 10/40 : RUN rm ./staging/src/myApp/*.csproj && cp -r ./staging/* ./ && rm -rf ./staging
---> Running in 6b17ae0fae89
cp: cannot stat './staging/src/myApp/myApp.csproj': No such file or directory
Using ls instead of cp throws a similar file not found error, so it looks like Docker still knows about myApp.csproj but cannot see it since it has been removed.
Is there a way around this? I have tried using rsync but similar problems.
I simply ignored the issue by tacking on ;exit 0 on the offending lines. Not great, but does the job.
EDIT: This worked for me as I cannot upgrade the version of CentOS. If you can, check out Alexander Block's answer.
I don't know specifically how to solve this problem as there's a lot of context in the filesystem that you haven't (and probably can't) share with us.
My suggestion on a strategy is that you:
1. comment out all lines from the failing one till the end of the Dockerfile
2. build the partial image
3. docker run -it [image] bash to jump into the partial image
4. poke around and figure out what's going wrong
5. repeat 1-4 until things work as expected
It's not as fun as a perfectly insightful answer of course but this is a relentlessly effective algorithm even if it's tedious and annoying.
EDIT
My wild guess is that somehow the Linux machine doesn't have the file where it's expected, so it doesn't get copied into the image at all, and that's why the docker build process can't find it. But there's no way to know without debugging the build process.
cp -r will stop and fail with that cannot stat <file> message whenever the source is a symbolic link and the target of the link does not exist. It will not copy links to non-existent files.
So my guess is that after you run COPY src/myApp/ ./staging/src/myApp your file ./staging/src/myApp/myApp.csproj is a symbolic link to a non-existent file. Why the following RUN rm ./staging/src/myApp/*.csproj doesn't remove it and stays silent about that, I don't know.
To help demonstrate my theory, see below showing cp failing on a symlink on Centos 7.
[547] $ docker run --rm -it centos:7
Unable to find image 'centos:7' locally
7: Pulling from library/centos
524b0c1e57f8: Pull complete
Digest: sha256:e9ce0b76f29f942502facd849f3e468232492b259b9d9f076f71b392293f1582
Status: Downloaded newer image for centos:7
[root@a47b77cf2800 /]# ln -s /tmp/foo /tmp/bar
[root@a47b77cf2800 /]# ls -l /tmp/foo
ls: cannot access /tmp/foo: No such file or directory
[root@a47b77cf2800 /]# ls -l /tmp/bar
lrwxrwxrwx 1 root root 8 Jul 6 05:44 /tmp/bar -> /tmp/foo
[root@a47b77cf2800 /]# cp /tmp/foo /tmp/1
cp: cannot stat '/tmp/foo': No such file or directory
[root@a47b77cf2800 /]# cp /tmp/bar /tmp/2
cp: cannot stat '/tmp/bar': No such file or directory
Notice how cp reports that it cannot stat either the target of the symbolic link or the link itself. It's the exact symptom you are seeing.
If you just want to get past this, you can try tar instead of cp or rsync.
Instead of
cp -r ./staging/* ./
use this instead:
tar -C ./staging -cf - . | tar -xf -
tar will happily copy symlinks that don't exist.
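If you want to check the broken-symlink theory in your own build before switching to tar, one option (my addition, not part of the original answer) is to list any dangling links in the staging directory right after the COPY step:
# GNU find: -xtype l matches symlinks whose target does not exist.
RUN find ./staging -xtype l -ls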
You've very likely encountered a kernel bug that was fixed long ago in more recent kernels. According to https://de.wikipedia.org/wiki/CentOS, CentOS 7 is based on Linux kernel 3.10, which is already quite old and does not have good Docker support in regard to the storage backend (overlay filesystem).
CentOS tried to backport the needed fixes and features into 3.10, but seems not to have fully succeeded when it comes to overlay support. There are multiple (slightly different) issues regarding this which you can find by searching for "CentOS 7 overlay driver" on the internet. All of them have in common that removing files from parent overlays does not work as expected.
To me it looks like rm calls on files return success even though the files are not fully removed. Directory listings (e.g. by ls, or shell expansion as in your case) then still list the file, while accessing the file fails (no matter if for read, write or deletion).
I assume that what you've seen is just another incarnation of these issues. You should either switch to CentOS 8 or upgrade your kernel (which is not officially supported by CentOS as far as I understand). Or, even more radically, switch to a distribution which is used more often in combination with Docker and generally offers more recent kernels, e.g. Debian or Ubuntu.
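A quick way to check whether a given host is in the affected combination (old 3.10 kernel plus the overlay storage driver) is to look at the kernel version and the storage driver Docker reports; this is purely a diagnostic sketch, not part of the original answer:
# Kernel version of the host (3.10.x on stock CentOS 7).
uname -r
# Storage driver the Docker daemon is using (overlay/overlay2 is the relevant one here).
docker info --format '{{.Driver}}'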

dpkg not working the same way when invoked from Dockerfile or within the container

I have a Dockerfile describing a container used to build some libs.
Basically, it looks like this:
FROM debian:stretch-slim
COPY somedebianrepo/*.deb \
/basedir/
RUN dpkg -i /basedir/*.deb
When I build the image, I get:
dpkg: dependency problems prevent configuration of [one of my lib] ... depends on [some other lib] however [some other lib] is not installed
Which may sound obvious... but: when I comment out the RUN line:
# RUN dpkg -i /basedir/*.deb
then build the image, start the container, and connect to it, I expected the dpkg command to behave the same... But actually, when I launch the command directly, it works fine with no such error.
root@host$ docker exec -it -u root <mycontainer> bash
root@mycontainer$ dpkg -i /basedir/*.deb
root@mycontainer$ (no error)
I also tried with apt-get install, and also encountered such different behaviors.
Since I am quite a newbie with Docker, the answer may be quite obvious... but still, it is not to me! I expected the commands executed through "RUN" to act the same way as if they were executed from within the container.
So if anyone could point out where I am wrong, she/he is welcome!
EDIT 1: I have tried running apt-get update before the dpkg command, though I did not expect it to work: no success.
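Not an answer, but one debugging step worth trying (my own sketch, not something from the question) is to make the RUN line show exactly which .deb files the glob expands to in that layer before dpkg runs, to rule out a copy or globbing problem:
RUN ls -l /basedir/ && dpkg -i /basedir/*.deb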

Getting Docker Container Id in Makefile to use in another command

I have a rule in my Makefile. Within this rule I need to manipulate some docker specific things so I need to get the id of the container in a portable way. In addition, I am using Docker Compose. Here is what I have that doesn't work.
a-rule: some deps
$(shell uuid="$(docker-compose ps -q myService)" docker cp "$$uuid":/a/b/c .)
I receive no errors or output, but I do not get a successful execution.
My goal is to get the uuid of the container that myService is running in and then use that uuid to copy a file from the container to my docker host.
edit:
the following works, but I'm still wondering if it's possible to do inline variable setting
uuid=$(shell docker-compose ps -q myService)
a-rule: some deps
docker cp "$(uuid)":/a/b/c .
I ran into the same problem and realised that make itself consumes single $ signs, so you need $$ to pass a literal $ through to the shell, and the variable assignment has to stay on the same logical recipe line as the command that uses it (each recipe line runs in its own shell). With that, this should work for you:
a-rule: some deps
uuid=$$(docker-compose ps -q myService);\
docker cp "$$uuid":/a/b/c .
Bit late to the party but I hope this helps someone.
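Following up on the edit in the question: the intermediate variable can also be dropped entirely by letting the shell substitute the id inline, with the same $$ escaping (a sketch using the same service name as above; the recipe line must be tab-indented):
a-rule: some deps
	docker cp "$$(docker-compose ps -q myService)":/a/b/c .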
