I'm trying to do an offline deployment of a docker image with RPM on CentOS.
My spec file is pretty simple:
Source1: myimage.tar.gz
...
%install
cp %{SOURCE1} ...
...
%post
docker load -i myimage.tar.gz
docker-compose up -d
docker image prune -af
I compress my image using docker save and gzip. Then, on another machine, I just load the image with docker and use docker-compose to run my service.
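For context, the compose file itself is nothing special; it is roughly this minimal sketch (the service and image names here are placeholders, not my real ones):

```yaml
# Minimal placeholder compose file; "myservice" and "myimage:latest" are made up
version: "3"
services:
  myservice:
    image: myimage:latest
    restart: unless-stopped
```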
When executing the commands "docker load" and "docker-compose up", I get this error:
sudo: unable to execute /bin/docker: Permission denied
sudo: unable to execute /bin/docker-compose: Permission denied
sudo: unable to execute /bin/docker: Permission denied
My user is part of the docker group, and I checked that the RPM scriptlets are executed as root; they are...
If I run the RPM on my dev machine, it works; if I execute the same commands in a script that is not part of the RPM, it also works...
Any ideas?
Thanks in advance.
You're probably being blocked by SELinux. You can temporarily disable it to check with setenforce 0.
If that is the problem (it was; this is a comment turned into an answer), here are some possible solutions:
You might be able to use audit2allow to change the denials into new rules to import.
Maybe udica will help. I don't know enough about it to tell.
I tried the first solution and it worked! grep rpm_script_t /var/log/audit/audit.log | audit2allow -m mypolicy > mypolicy.te
The problem came from the fact that the RPM scriptlets didn't have access to container_runtime_exec_t:file entrypoint, which I suppose is what allows them to run container tooling like docker.
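For reference, the generated mypolicy.te looks roughly like this (the type names come from the audit denials; the exact permission set that audit2allow emits may differ):

```
module mypolicy 1.0;

require {
    type rpm_script_t;
    type container_runtime_exec_t;
    class file { entrypoint execute open read };
}

# let RPM scriptlets execute the container runtime (docker)
allow rpm_script_t container_runtime_exec_t:file { entrypoint execute open read };
```

After reviewing the rules, the module can be built and loaded with audit2allow -M mypolicy followed by semodule -i mypolicy.pp.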
Thanks a lot for the tip!
I have only started using Docker and was trying to follow the documentation on the official website... Everything was going smoothly until I got to this point.
In step 3:
Upon running the command, I get this error -> ls: cannot access 'C:/Program Files/Git/': No such file or directory.
I thought it was not that big of a deal so I went ahead and skipped to the following parts of the tutorial.
Then I came across the same error in this part:
I tried to locate the directory on my PC manually and found a remote git repository, but the commands still don't work for me. These were the commands that I have tried and their corresponding errors:
docker run -it ubuntu ls / - No such file or directory
cd /path/to/getting-started/app - No such file or directory
docker run -dp 3000:3000 ` -w /app -v "$(pwd):/app" ` node:12-alpine ` sh -c "yarn install && yarn run dev" - docker: Error response from daemon: the working directory 'C:/Program Files/Git/app' is invalid, it needs to be an absolute path.
See 'docker run --help'. (this error was after changing to the directory I manually searched on my PC)
I'm unsure if I have to set a PATH? I don't think I have missed any of the steps provided in the earlier tutorials.
Thanks, guys! I was indeed using Git Bash in VS Code. I tried running it in my Windows terminal via Ubuntu instead, and now everything's working fine. Thanks, Max and Spears, that was exactly what I was having issues with.
These comments helped me resolve the issue:
Maybe this is your problem: github.com/docker-archive/toolbox/issues/673 – Max
Sounds like you are using the Git Bash which comes packaged with Git SCM for Windows. I strongly recommend avoiding this and switching to WSL2. Git Bash is NOT the kind of shell you are looking for when using Docker, due to missing libs and nasty side effects which are mostly very hard to debug. – Spears
I am trying to download a docker image called anchore-engine, found at the following link: https://hub.docker.com/r/anchore/anchore-engine/
For ease, I will post a copy of the code used to create the image and get it running as they have specified.
Here is a link to the image; I tried posting the image inline, but that requires 10 reputation.
The issue I am having is specifically with this line of the setup:
docker cp ae:/docker-compose.yaml ~/aevolume/docker-compose.yaml
I get the following error message from my terminal:
Error: No such container:path: ae:/docker-compose.yaml
My question is how do I fix this?
I am not very familiar with $PATH.
After echoing $PATH with echo "$PATH", I just see a very messy blob of paths, and I do not really know how to create the necessary container in the directory specified, which was made by the first line, mkdir ~/aevolume.
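To make that blob readable I split it one entry per line ($PATH, as I understand it, just lists where the shell looks for executables, so it seems separate from the container path in the docker cp command):

```shell
# $PATH is a colon-separated list of directories searched for executables;
# print one entry per line instead of one long blob
echo "$PATH" | tr ':' '\n'
```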
The thing is, the command specifically requires a container, and when I type ls in ~/aevolume, I get a blank response.
Please help and thanks.
The lines I have been able to run are:
mkdir ~/aevolume
cd ~/aevolume
docker pull docker.io/anchore/anchore-engine:latest
docker create --name ae docker.io/anchore/anchore-engine:latest
but when I try running
docker cp ae:/docker-compose.yaml ~/aevolume/docker-compose.yaml
as I have done in the following line:
aevolume admin$ docker cp ae:/docker-compose.yaml ~/aevolume/docker-compose.yaml
I get this response:
Error: No such container:path: ae:/docker-compose.yaml
It seems like there's an issue with the latest version of the Anchore image that was updated 14 days ago. I've been experiencing the same issue. I went ahead and used version 0.7.0 for the commands and it worked fine:
mkdir ~/aevolume
cd ~/aevolume
docker pull anchore/anchore-engine:v0.7.0
docker create --name ae anchore/anchore-engine:v0.7.0
docker cp ae:/docker-compose.yaml ~/aevolume/docker-compose.yaml
Hopefully this helps, I was stuck on it for a bit haha.
@Sandeep Kumar There is a correction in the last command.
docker cp ae:/docker-compose.yaml ~/aevolume/docker-compose.yaml
If you are unable to get docker-compose.yaml for anchore, try this:
curl https://docs.anchore.com/current/docs/engine/quickstart/docker-compose.yaml > docker-compose.yaml
Here is the reference:
https://docs.anchore.com/current/docs/engine/quickstart/
It worked well for me after I ran into the same issue as described above.
I am testing how to install a sample network for Hyperledger Fabric. When I run 'byfn.sh' inside the 'first-network' project, the console prints this error:
/bin/bash: scripts/script.sh: Permission denied
It looks like this line produces the above error:
docker exec cli scripts/script.sh $CHANNEL_NAME $CLI_DELAY $LANGUAGE $CLI_TIMEOUT $VERBOSE
I am using the root user, and I have used chmod +x or u+x to make all the *.sh files executable, but it doesn't work. Any ideas? Many thanks!
Finally, I solved this problem myself.
Set SELinux to permissive mode:
$ setenforce 0
$ docker exec -it cli bash
the detailed solution is below:
https://nanxiao.me/en/selinux-cause-permission-denied-issue-in-using-docker/
I have no idea why it happens, but I hope it can help someone.
Using sudo when running the command will solve the issue.
I'm trying to cache the things that my Gradle build currently downloads on every run. For that, I mount a volume with the -v option, like -v gradle_cache:/root/.gradle
The thing is, each time I rerun the build with the exact same command, it still downloads everything again. The full command I use to run the image is
sudo docker run --rm -v gradle_cache:/root/.gradle -v "$PWD":/home/gradle/project -w /home/gradle/project gradle:jdk8-alpine gradle jar
I also checked the directory where Docker stores the volume contents, /var/lib/docker/volumes/gradle_cache/_data, but that is also empty.
my console log
What am I missing to make this work?
Edit: As requested, I reran the command with the --scan option.
And also with a different Gradle home:
$ sudo docker run --rm -v gradle_cache:/root/.gradle -v "$PWD":/home/gradle/project -w /home/gradle/project gradle:jdk8-alpine gradle jar --gradle-user-home /root/.gradle
FAILURE: Build failed with an exception.
* What went wrong:
Failed to load native library 'libnative-platform.so' for Linux amd64.
After looking at the Dockerfile of the container image I'm using, I found out that the right option is -v gradle_cache:/home/gradle/.gradle.
What made me think the files were cached in /root/.gradle is that the Dockerfile also sets up /root/.gradle as a symlink to /home/gradle/.gradle:
ln -s /home/gradle/.gradle /root/.gradle
So inspecting the filesystem after a build made it look like the files were stored there.
Since 6.2.1, Gradle supports a shared, read-only dependency cache for this scenario:
It’s a common practice to run builds in ephemeral containers. A container is typically spawned to only execute a single build before it is destroyed. This can become a practical problem when a build depends on a lot of dependencies which each container has to re-download. To help with this scenario, Gradle provides a couple of options:
copying the dependency cache into each container
sharing a read-only dependency cache between multiple containers
https://docs.gradle.org/current/userguide/dependency_resolution.html#sub:ephemeral-ci-cache describes the steps to create and use the shared cache.
Alternatively, to have more control over the cache directory, you can use this:
ENV GRADLE_USER_HOME /path/to/custom/cache/dir
VOLUME $GRADLE_USER_HOME
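For example, in a Dockerfile for a custom build image (the cache path here is just an illustration, not a required location):

```dockerfile
FROM gradle:jdk8-alpine
# Keep the Gradle dependency cache in a fixed, volume-backed location
ENV GRADLE_USER_HOME /home/gradle/cache
VOLUME /home/gradle/cache
```

A named volume mounted at that path (e.g. -v gradle_cache:/home/gradle/cache) then persists the cache across containers.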
I have a rule in my Makefile. Within this rule I need to manipulate some docker specific things so I need to get the id of the container in a portable way. In addition, I am using Docker Compose. Here is what I have that doesn't work.
a-rule: some deps
$(shell uuid="$(docker-compose ps -q myService)" docker cp "$$uuid":/a/b/c .)
I receive no errors or output, but I do not get a successful execution.
My goal is to get the uuid of the container that myService is running in and then use that uuid to copy a file from the container to my docker host.
edit:
the following works, but I'm still wondering if it's possible to set the variable inline
uuid=$(shell docker-compose ps -q myService)
a-rule: some deps
docker cp "$(uuid)":/a/b/c .
I ran into the same problem and realised that in a Makefile recipe you need $$ so that a literal $ reaches the shell. So I tried that, and this should work for you:
a-rule: some deps
uuid=$$(docker-compose ps -q myService);\
docker cp "$$uuid":/a/b/c .
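To see the $$ expansion in isolation, here is a throwaway demo with the docker commands replaced by echo, so nothing real is copied and it runs anywhere:

```shell
# Toy Makefile: make expands $$ to a single $ before handing the recipe to the shell
printf 'a-rule:\n\tuuid=$$(echo abc123); echo "docker cp $$uuid:/a/b/c ."\n' > /tmp/demo.mk
make -s -f /tmp/demo.mk a-rule
# prints: docker cp abc123:/a/b/c .
```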
Bit late to the party but I hope this helps someone.