Enable experimental Docker features on GitHub workflow images - docker

We are trying to enable experimental features on the ubuntu-latest image in GitHub workflows, since we would like to use --squash to reduce image size. However, this is not possible, as we get the following error:
/home/runner/work/_temp/59d363d1-0231-4d54-bffe-1e3205bf6bf3.sh: line
3: /etc/docker/daemon.json: Permission denied
for the following workflow:
- name: Build, tag, and push TOING image to Amazon ECR
  id: build-image
  env:
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    ECR_REPOSITORY: TOING/TOING/TOING_REPO
    IMAGE_TAG: TOING_TEST
    DOCKER_CLI_EXPERIMENTAL: enabled
  run: |
    # build and push images
    sudo rm -rf /etc/docker/daemon.json
    sudo echo '{"experimental": true}' >> /etc/docker/daemon.json
    sudo systemctl restart docker
    docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG -f core/TOING/Dockerfile .
    docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
    echo "::set-output name=image::$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"
We have verified that the daemon.json file is properly updated, and also used sudo for our commands, as shown.
We have also opened an issue on GitHub regarding this, but have had no response so far. I would be grateful for any help.
PS: We have tried both "experimental": true and "experimental": "enabled".

We have verified that the daemon.json file is properly updated
It looks like it's not properly updated, based on your error message:
/home/runner/work/_temp/59d363d1-0231-4d54-bffe-1e3205bf6bf3.sh: line
3: /etc/docker/daemon.json: Permission denied
What's going on here? Well, the sudo command will run the given command as root. But you're doing a shell redirect, which is handled by the shell itself, not by sudo. In other words, you're redirecting the output of sudo.
If you want to write to a file as root then you'll need to actually run a command that writes the file, and then run that using sudo. For example:
echo '{"experimental": true}' | sudo tee -a /etc/docker/daemon.json
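Applied to the workflow from the question, the run block might end up looking roughly like this (a sketch only: tee without -a simply overwrites the file, which also makes the earlier rm -rf unnecessary, and --squash is the experimental flag the question is ultimately after):
run: |
  # write the daemon config as root, then restart the daemon so it takes effect
  echo '{"experimental": true}' | sudo tee /etc/docker/daemon.json
  sudo systemctl restart docker
  docker build --squash -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG -f core/TOING/Dockerfile .
  docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
  echo "::set-output name=image::$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"
Note that overwriting the file discards any settings the runner image may already ship in daemon.json; the answers below deal with merging into an existing file instead.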

This works best for me:
tmp=$(mktemp)
sudo jq '.+{experimental:true}' /etc/docker/daemon.json > "$tmp"
sudo mv "$tmp" /etc/docker/daemon.json
sudo systemctl restart docker.service
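If you want to double-check that the daemon actually picked up the flag after the restart, querying docker info should work (the format field below comes from the current docker CLI and may not exist on very old versions):
docker info --format '{{.ExperimentalBuild}}'   # prints "true" once experimental mode is on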

Edward Thomson's reply is on point; however, it assumes that the daemon.json file is empty. I stumbled on this in my GitHub workflow definition, where the file was already present with a JSON object, so simply appending {"experimental": true} would yield no benefit.
My quick recommendation is to use the sed tool for the job.
sudo sed -i 's/}/,"experimental": true}/' /etc/docker/daemon.json
Here we replace the closing brace of the object with our key/value pair and only then close the brace again.
For a more in-depth explanation, see my reply on the respective GitHub issue: https://github.com/actions/starter-workflows/issues/336#issuecomment-1213996399.
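To illustrate, here is the effect of that sed expression on a hypothetical daemon.json that already holds one key (the starting contents are made up for the example; note the expression rewrites the first } it finds, so it assumes a flat, single-line object):
$ cat /etc/docker/daemon.json
{"cgroup-parent": "/actions_job"}
$ sudo sed -i 's/}/,"experimental": true}/' /etc/docker/daemon.json
$ cat /etc/docker/daemon.json
{"cgroup-parent": "/actions_job","experimental": true}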

Related

docker-compose.yml for DECIDIM doesn't work

I would like to create a Docker image of DECIDIM with Docker Compose, but when I install it in Docker, it gives me the following error:
" Error invoking remote method 'docker-start-container': Error: (HTTP
code 400) unexpected - failed to create shim task: OCI runtime create
failed: runc create failed: unable to start container process: exec:
"/code/vendor/hello-world.sh": stat /code/vendor/hello-world.sh: no
such file or directory: unknown"
As you are users of this application, I am asking you directly: how can I solve this?
https://github.com/decidim/docker/blob/master/docker-compose.yml
PS: I'm running on Windows, and only on Windows. I can use PowerShell and WSL. I have opened an issue on GitHub here: https://github.com/decidim/docker/issues/101
I tried to use Docker Compose to create a Docker image, but it isn't working.
Please try to run the commands suggested by the developer (https://github.com/decidim/docker/issues/101#issuecomment-1418184735) in Git Bash. It's usually installed along with Git for Windows. Then the commands should run fine:
git config --global core.fileMode false
git config --global core.autocrlf true
git clone https://github.com/decidim/docker.git
find ./docker -type f -print0 | xargs -0 -n 1 -P 4 dos2unix
The developer's answer is:
The error you are facing is Windows-related.
Try to run the following set of commands in a CMD terminal to configure git on your Windows machine:
git config --global core.fileMode false
git config --global core.autocrlf true
After that:
git clone https://github.com/decidim/docker.git
find ./docker -type f -print0 | xargs -0 -n 1 -P 4 dos2unix
After that, everything should be okay, so you could run:
cd docker
docker compose up
But xargs and dos2unix are not available in the Windows command prompt, which is why Git Bash is needed.
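If you want to confirm that line endings are actually the culprit, the file utility (available in Git Bash and in WSL) reports CRLF terminators; the path below is only inferred from the error message and may differ in the checked-out repository:
file docker/vendor/hello-world.sh    # "with CRLF line terminators" in the output confirms the problem
dos2unix docker/vendor/hello-world.sh
file docker/vendor/hello-world.sh    # the CRLF note should now be gone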

Run docker load inside RPM file

I'm trying to do an offline deployment of a docker image with RPM on CentOS.
My spec file is pretty simple:
Source1: myimage.tar.gz
...
%install
cp %{SOURCE1} ...
...
%post
docker load -i myimage.tar.gz
docker-compose up -d
docker image prune -af
I compress my image using docker save and gzip. Then, on another machine, I just load the image with docker and use docker-compose to run my service.
When executing the docker load and docker-compose up commands, I get these errors:
sudo: unable to execute /bin/docker: Permission denied
sudo: unable to execute /bin/docker-compose: Permission denied
sudo: unable to execute /bin/docker: Permission denied
My user is part of the docker group, and I checked whether the RPM scriptlets are executed as root; they are.
If I run the RPM on my dev machine, it works; if I execute the commands in a script that is not part of the RPM, it works...
Any ideas?
Thanks in advance.
You're probably being blocked by SELinux. You can temporarily disable it to check with setenforce 0.
If that is the problem (it is; this is a comment turned into an answer), some possible solutions:
You might be able to use audit2allow to change the denials into new rules to import (a rough sketch of that route follows below).
Maybe udica will help. I don't know enough about it to tell.
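Picking up the audit2allow suggestion, a typical way to turn the logged denials into a loadable policy module looks roughly like this (the module name mypolicy is arbitrary; run it after reproducing the failure so the AVC denials are present in the audit log):
# generate mypolicy.te and the compiled mypolicy.pp from the recorded denials
grep rpm_script_t /var/log/audit/audit.log | audit2allow -M mypolicy
# review mypolicy.te, then install the compiled module
sudo semodule -i mypolicy.pp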
I tried the first solution and it worked!
grep rpm_script_t /var/log/audit/audit.log | audit2allow -m mypolicy > mypolicy.te
The problem came from the fact that the RPM scripts didn't have access to the container_runtime_exec_t:file entrypoint permission, which, I suppose, is what allows them to run container runtimes like docker.
Thanks a lot for the tip !

`PAM: Authentication failure` when running `chpasswd` on Alpine Linux

I am running Alpine Linux like this:
$ docker run --rm -it alpine sh
Then running the following commands:
/ # apk add shadow
/ # /usr/sbin/useradd -m -u 1000 jenkins
Creating mailbox file: No such file or directory
/ # echo "jenkins:mypassword" | chpasswd
Password: chpasswd: PAM: Authentication failure
According to this, the warning Creating mailbox file: No such file or directory can be safely ignored.
My problem is that chpasswd is failing with the vague error message seen in the last line. I tried the exact commands on CentOS and Ubuntu and they worked there.
This turned out to be a bug in Alpine 3.6+. A new pull request is supposed to have fixed this as mentioned here: https://bugs.alpinelinux.org/issues/10209
Are you sure the root account is enabled?
This might be a consequence of this change: https://github.com/alpinelinux/aports/commit/72c7a7a3caf28c06289dc5f65e1756b38cfb00ca

How to change the root dir of docker on Ubuntu 18.04 LTS? (docker change location of volumes)

I installed Ubuntu 18.04 LTS and selected the option to install docker (17.06.2-ce) at the same time.
I tested by running hello-world (sudo docker run hello-world):
[...]
Hello from Docker!
This message shows that your installation appears to be working correctly.
[...]
I mounted a software RAID on the folder named /raid and made a folder /docker-data in it.
I tried to change the root dir of my docker installation to /raid/docker-data/ by following the few tutorials I found online... in vain.
These solutions don't work either:
/etc/default/docker: I can't find this file.
As in the 2nd solution: docker can't find its folder.
Docker Root Dir: /var/snap/docker/common/var-lib-docker
Has anyone managed to do this feat in recent months?
(this is my 3rd installation of ubuntu and I just broke it...)
Apparently on Ubuntu 18.04 LTS, docker 17.06.2-ce needs to work with snap, so I'm going to dig in that direction. I will try to post an answer later...
The common solution is to move the data and create a symlink:
systemctl stop docker
mv /var/lib/docker /raid/docker-data
ln -s /raid/docker-data /var/lib/docker
systemctl start docker
You can also tell docker about the new location with a setting in /etc/docker/daemon.json. If you don't have this file, you could create one with the contents:
{
"data-root": "/raid/docker-data"
}
I would recommend the first solution since you will find many 3rd party tools expect docker to be located in /var/lib/docker.
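Whichever route you take, after restarting the daemon you can check which directory is actually in use (the format field assumes a reasonably recent docker CLI):
docker info --format '{{.DockerRootDir}}'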
Sorry for this late response.
To come back to my problem, after having looked at it more closely:
I am on Ubuntu 18.04; I can add or remove the docker service only via snap install docker (or snap remove docker).
Part of docker installs in /var/snap/,
so I transposed your solution like this:
mv /var/snap/ /raid/snap
ln -s /raid/snap /var/snap
and finally installed docker via snap install docker.
I tested with sudo docker info, and this error message appears:
cannot perform operation: mount --rbind /var/snap /tmp/snap.rootfs_RRAjdq//var/snap: Permission denied
(snap.rootfs_* because the suffix changes on every invocation)
And yet the installation went well, and docker apparently did end up on /raid/snap.
I come back to you to give you the solution that allowed me to solve this problem.
cannot perform operation: mount --rbind /var/snap /tmp/snap.rootfs_RRAjdq//var/snap: Permission denied
I know why : https://bugs.launchpad.net/snapcraft/+bug/1620771 :
When /home is a symlink snaps don't work.
When /home is a real directory snaps work, see output below
In my case :
When /raid/snap is a symlink snaps don't work.
When /var/snap is a real directory snaps work.
I removed docker. I had to reinstall snapd because I had been modifying its files directly (the wrong way).
From there, I stopped the snapd service:
sudo mv /var/snap/ /raid/snap
sudo mount --rbind /raid/snap /var/snap
I started the snapd service.
sudo snap install docker
sudo docker info <= to test
sudo docker run hello-world <= to test
I made the mount permanent in /etc/fstab:
/raid/snap /var/snap none bind
I restarted my OS: it worked, at least in my case. (I kept checking along the way that the docker files were indeed ending up on the RAID...)
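For reference, the same fstab entry with all six fields spelled out, plus one way to check that the bind mount is active after a reboot (findmnt is part of util-linux on Ubuntu), would be roughly:
# /etc/fstab
/raid/snap  /var/snap  none  bind  0  0
findmnt /var/snap    # the SOURCE column should point at /raid/snap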
Change Docker root storage (data path):
Run this command to find the docker data path:
$ sudo docker info | grep -i root
default path:
root@user-testing-HP-ProBook-4540s:/etc/docker# docker info | grep -i root
Root Dir: /var/lib/docker/aufs
WARNING: No swap limit support
Docker Root Dir: /var/lib/docker
First, stop docker:
sudo service docker stop
Copy the current data path to the new path:
sudo rsync -aqxP /var/lib/docker /data/docker-data/
Add the following to the /etc/docker/daemon.json file
(if the file is not there, create it with vim or your favourite editor: sudo vim /etc/docker/daemon.json):
{
"data-root": "/data/docker-data/docker"
}
Confirm with the cat command:
cat /etc/docker/daemon.json
Output will be like this:
root@user-testing-HP-ProBook-4540s:/home/user/Downloads# cat /etc/docker/daemon.json
{
"data-root": "/data/docker-data/docker"
}
root@user-testing-HP-ProBook-4540s:/home/user/Downloads#
start docker:
sudo service docker start
Check the root (data path) now:
$ sudo docker info | grep -i root
Output will be like this:
root@user-testing-HP-ProBook-4540s:/home/user/Downloads# sudo docker info | grep -i root
Root Dir: /data/docker-data/docker/aufs
WARNING: No swap limit support
Docker Root Dir: /data/docker-data/docker
root@user-testing-HP-ProBook-4540s:/home/user/Downloads#

Docker: permission denied while trying to connect to Docker Daemon with local CircleCI build

I have a very simple config.yml:
version: 2
jobs:
  build:
    working_directory: ~/app
    docker:
      - image: circleci/node:8.4.0
    steps:
      - checkout
      - run: node -e "console.log('Hello from NodeJS ' + process.version + '\!')"
      - run: yarn
      - setup_remote_docker
      - run: docker build .
All it does: boot a node image, test if node is running, do a yarn install and a docker build.
My dockerfile is nothing special; it has a COPY and ENTRYPOINT.
When I run circleci build on my MacBook Air using Docker Native, I get the following error:
Got permission denied while trying to connect to the Docker daemon socket at unix://[...]
If I change the docker build . command to: sudo docker build ., everything works as planned, locally, with circleci build.
However, pushing this change to CircleCI will result in an error: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
So, to summarize: using sudo works, locally, but not on CircleCI itself. Not using sudo works on CircleCI, but not locally.
Is this something the CircleCI staff has to fix, or is there something I can do?
For reference, I have posted this question on the CircleCI forums as well.
I've created a workaround for myself.
In the very first step of the config.yml, I run this command:
if [[ $CIRCLE_SHELL_ENV == *"localbuild"* ]]; then
  echo "This is a local build. Enabling sudo for docker"
  echo sudo > ~/sudo
else
  echo "This is not a local build. Disabling sudo for docker"
  touch ~/sudo
fi
Afterwards, you can do this:
eval `cat ~/sudo` docker build .
Explanation:
The first snippet checks if the CircleCI-provided environment variable CIRCLE_SHELL_ENV contains localbuild. This is only true when running circleci build on your local machine.
If true, it creates a file called sudo with contents sudo in the home directory.
If false, it creates a file called sudo with NO contents in the home directory.
The second snippet reads the ~/sudo file and evaluates its contents together with the arguments you give afterwards. If the ~/sudo file contains "sudo", the command in this example becomes sudo docker build .; if it contains nothing, it becomes docker build . (with a leading space, which is ignored).
This way, both the local (circleci build) builds and remote builds will work.
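Put together in config.yml, the two snippets might be wired up like this (the step names are just placeholders):
- run:
    name: Decide whether docker needs sudo
    command: |
      if [[ $CIRCLE_SHELL_ENV == *"localbuild"* ]]; then
        echo sudo > ~/sudo
      else
        touch ~/sudo
      fi
- run:
    name: Build the image
    command: eval `cat ~/sudo` docker build .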
To iterate on Jeff Huijsmans's answer,
an alternative version is to use a Bash variable for docker:
- run:
    name: Set up docker
    command: |
      if [[ $CIRCLE_SHELL_ENV == *"localbuild"* ]]; then
        echo "export docker='sudo docker'" >> $BASH_ENV
      else
        echo "export docker='docker'" >> $BASH_ENV
      fi
Then you can use it in your config
- run:
    name: Verify docker
    command: $docker --version
You can see this in action in my test for my Dotfiles repository.
Documentation about environment variables in CircleCI.
You might also solve your issue by running the docker image as root. Specify user: root under the image parameter:
...
jobs:
  build:
    working_directory: ~/app
    docker:
      - image: circleci/node:8.4.0
        user: root
    steps:
      - checkout
      ...
...
