I am trying to build a multiarch manifest with Cirrus CI, so I need to enable Docker's experimental option.
But the experimental option is not being taken into account.
In the .cirrus.yml I have something like:
publish_docker_builder:
  script: |
    mkdir -p $HOME/.docker
    echo '{ "experimental": "enabled" }' > $HOME/.docker/config.json
    docker info
    docker login --username=$DOCKERHUB_USER --password=$DOCKERHUB_PASS
    docker manifest create --amend $CIRRUS_REPO_FULL_NAME:latest $CIRRUS_REPO_FULL_NAME:linux $CIRRUS_REPO_FULL_NAME:rpi $CIRRUS_REPO_FULL_NAME:windows
But the execution reports:
mkdir -p $HOME/.docker
echo '{ "experimental": "enabled" }' > $HOME/.docker/config.json
....
Labels:
Experimental: false
....
docker manifest create is only supported on a Docker cli with experimental cli features enabled
The full log is https://api.cirrus-ci.com/v1/task/6577836603736064/logs/main.log
Is this a limitation of the dockerd available in Cirrus CI, or did I make some wrong configuration?
The Docker CLI seems to have changed the way experimental CLI features are enabled:

DOCKER_CLI_EXPERIMENTAL  Enable experimental features for the cli (e.g. enabled or disabled)
Adding to the .cirrus.yml:

env:
  DOCKER_CLI_EXPERIMENTAL: enabled
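For context, a minimal sketch of how this could look in the task from the question (the placement of env inside the task and the final manifest push step are assumptions of mine, not taken from the original answer):

publish_docker_builder:
  env:
    DOCKER_CLI_EXPERIMENTAL: enabled
  script: |
    docker login --username=$DOCKERHUB_USER --password=$DOCKERHUB_PASS
    docker manifest create --amend $CIRRUS_REPO_FULL_NAME:latest $CIRRUS_REPO_FULL_NAME:linux $CIRRUS_REPO_FULL_NAME:rpi $CIRRUS_REPO_FULL_NAME:windows
    docker manifest push $CIRRUS_REPO_FULL_NAME:latest

Because the variable only affects the CLI, no daemon restart is needed.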
Related
We are trying to enable experimental features on the ubuntu-latest image in GitHub workflows, since we would like to use --squash to reduce image size. However, this is not possible, as we get the following error:
/home/runner/work/_temp/59d363d1-0231-4d54-bffe-1e3205bf6bf3.sh: line
3: /etc/docker/daemon.json: Permission denied
for the following workflow:
- name: Build, tag, and push TOING image to Amazon ECR
  id: build-image
  env:
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    ECR_REPOSITORY: TOING/TOING/TOING_REPO
    IMAGE_TAG: TOING_TEST
    DOCKER_CLI_EXPERIMENTAL: enabled
  run: |
    # build and push images
    sudo rm -rf /etc/docker/daemon.json
    sudo echo '{"experimental": true}' >> /etc/docker/daemon.json
    sudo systemctl restart docker
    docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG -f core/TOING/Dockerfile .
    docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
    echo "::set-output name=image::$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"
We have verified that the daemon.json file is properly updated, and also used sudo for our commands, as shown.
We have also opened an issue on GitHub regarding this, but have had no response so far. I would be grateful for any help.
PS: We have tried both "experimental": true and "experimental": "enabled".
We have verified that the daemon.json file is properly updated
It looks like it's not properly updated, based on your error message:
/home/runner/work/_temp/59d363d1-0231-4d54-bffe-1e3205bf6bf3.sh: line
3: /etc/docker/daemon.json: Permission denied
What's going on here? Well, the sudo command will run the given command as root. But you're doing a shell redirect, which is handled by the shell itself, not by sudo. In other words, you're redirecting the output of sudo.
If you want to write to a file as root then you'll need to actually run a command that writes the file, and then run that using sudo. For example:
echo '{"experimental": true}' | sudo tee -a /etc/docker/daemon.json
This works best for me:
tmp=$(mktemp)
sudo jq '.+{experimental:true}' /etc/docker/daemon.json > "$tmp"
sudo mv "$tmp" /etc/docker/daemon.json
sudo systemctl restart docker.service
Edward Thomson's reply is on point; however, it assumes that the daemon.json file is empty. I stumbled into this in my GitHub workflow definition, where the file was already present with an object, so simply appending {"experimental": true} would yield no benefit.
My quick recommendation is to use the sed tool for the job:
sudo sed -i 's/}/,"experimental": true}/' /etc/docker/daemon.json
Here we replace the object's closing brace with our key/value pair and only then close the object.
For a more in-depth explanation, see my reply on the respective GitHub issue: https://github.com/actions/starter-workflows/issues/336#issuecomment-1213996399.
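For illustration, with a hypothetical pre-existing daemon.json (the contents here are made up):

$ cat /etc/docker/daemon.json
{"cgroup-parent":"/actions_job"}
$ sudo sed -i 's/}/,"experimental": true}/' /etc/docker/daemon.json
$ cat /etc/docker/daemon.json
{"cgroup-parent":"/actions_job","experimental": true}

Keep in mind that this substitutes the first } in the file, so it only behaves correctly for a flat, single-line JSON object; for nested JSON, the jq merge shown above is the safer route.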
I have the following script that's run in my Jenkins job:
set +x
SERVICE_ACCOUNT=`cat "$GCLOUD_AUTH_FILE"`
docker login -u _json_key -p "${SERVICE_ACCOUNT}" https://gcr.io
set -x
docker pull gcr.io/$MYPROJECT/automation:master
docker run --rm --attach STDOUT -v "$(pwd)":/workspace -v "$GCLOUD_AUTH_FILE":/gcloud-auth/service_account_key.json -v /var/run/docker.sock:/var/run/docker.sock -e "BRANCH=master" -e "PROJECT=myproject" gcr.io/myproject/automation:master "/building/buildImages.sh" "myapp"
if [ $? -ne 0 ]; then
exit 1
fi
I am now trying to do this in cloudbuild.yaml so that I can run my script using my own automation image (which has a bunch of dependencies such as docker, jdk, and pip installed), and mount my git folders in my workspace directory.
I tried putting my cloudbuild.yaml at the top level of my git repo and setting it up like this:
steps:
- name: 'gcr.io/myproject/automation:master'
  volumes:
  - name: 'current-working-dir'
    path: /mydirectory
  args: ['bash', '-c', '/building/buildImages.sh', 'myapp']
timeout: 4000s
But this gives me an error saying:

invalid build: Volume "current-working-dir" is only used by one step
Just FYI, my script buildImages.sh copies folders and Dockerfiles, runs pip install, npm, and gradle commands, and then docker build commands (kind of an all-in-one solution).
What's the way to translate my script to cloudbuild.yaml?
Try this in your cloudbuild.yaml:
steps:
- name: 'gcr.io/<your-project>/<image>'
  args: ['sh', '<your-script>.sh']
Using this, I was able to pull the image from Google Container Registry that has my script, then run the script using sh. It didn't matter where the script was. I'm using alpine as the base image in my Dockerfile.
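As a fuller sketch of how the original Jenkins script might translate (hedged: the entrypoint override and env entries are assumptions based on the docker run flags in the question; Cloud Build checks the repo out into /workspace and shares it with every step, which removes the need for the volumes block):

steps:
- name: 'gcr.io/myproject/automation:master'
  entrypoint: 'bash'
  args: ['/building/buildImages.sh', 'myapp']
  env:
  - 'BRANCH=master'
  - 'PROJECT=myproject'
timeout: 4000s

A non-zero exit code from the script fails the step, so the explicit if [ $? -ne 0 ] check from the Jenkins version is not needed.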
So I'm trying to set up my GitLab CI to trigger a job on git push that builds and deploys my Docker image. This is the .gitlab-ci.yml file I'm using, based on an example from the GitLab docs (Elixir yml).
stages:
  - build

build:
  before_script:
    - docker build -f Dockerfile.build -t ci-project-build-$CI_PROJECT_ID:$CI_BUILD_REF .
    - docker create
      -v /build/deps
      -v /build/_build
      -v /build/rel
      -v /root/.cache/aceapp/
      --name build_data_$CI_PROJECT_ID_$CI_BUILD_REF busybox /bin/true
  tags:
    - docker
  stage: build
  script:
    - docker run --volumes-from build_data_$CI_PROJECT_ID_$CI_BUILD_REF --rm -t ci-project-build-$CI_PROJECT_ID:$CI_BUILD_REF
The output when pushing to the GitLab instance is this:
Running with gitlab-runner 10.7.2 (b5e03c94)
on my.host.rhel.runner 8f724ea7
Using Shell executor...
Running on my.host.local...
Fetching changes...
HEAD is now at 14351c4 Merge branch 'Development' into 'master'
From https://my.host.example/zalmosc/ace-app
14351c4..9fa2d43 master -> origin/master
Checking out 9fa2d435 as master...
Skipping Git submodules setup
$ # Auto DevOps variables and functions # collapsed multi-line command
$ setup_docker
$ build
Logging to GitLab Container Registry with CI credentials...
Login Succeeded
Building Dockerfile-based application...
invalid argument "/master:9fa2d4358e6c426b882e2251aa5a49880013614b" for t: Error parsing reference: "/master:9fa2d4358e6c426b882e2251aa5a49880013614b" is not a valid repository/tag: invalid reference format
See 'docker build --help'.
ERROR: Job failed: exit status 1
I understand the docker tag is not valid (is the before_script: really triggered, based on the name?), and I'm looking for help regarding a) a solution, and b) how I can learn more about the requirements for a pipeline that builds Docker images with the default settings. Do I need to tag my Docker image locally and then somehow add this to my git commit?
The thing is, -t is for tagging your Docker image. See the docs here.
The tag should be formatted like name:version, and you are giving it /master:9fa2d4358e6c426b882e2251aa5a49880013614b, which is not a valid tag. You could try deleting the / before master.
Your tag cannot begin with '/':
$ docker build -f Dockerfile.build -t /master:9fa2d4358e6c426b882e2251aa5a49880013614b .
invalid argument "/master:9fa2d4358e6c426b882e2251aa5a49880013614b" for "-t, --tag" flag: invalid reference format
See 'docker build --help'.
# remove '/'
$ docker build -f Dockerfile.build -t master:9fa2d4358e6c426b882e2251aa5a49880013614b .
Sending build context to Docker daemon 3.584kB
Step 1/3 : FROM ubuntu:16.04
---> 14f60031763d
...
If you are not using the built-in registry, you might have to set the CI_REGISTRY_IMAGE value to something. It seems that if you don't set it, it defaults to /master and causes this error. You can set this on the CI settings page, or when creating a new pipeline, e.g. CI_REGISTRY_IMAGE = gitlab.com/user/project.
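A minimal sketch of setting it in .gitlab-ci.yml instead of the settings page (hedged: the registry hostname here is a placeholder; use whatever your GitLab instance actually serves):

variables:
  CI_REGISTRY_IMAGE: registry.gitlab.com/user/project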
I would like to use the docker container apiaryio/dredd instead of the npm package dredd. I am not familiar with running and debugging npm-based docker images. How can I run the basic usage example from the npm package's "Quick Start" section
$ dredd init
$ dredd
if I have a Swagger file instead of the api-description.apib, in $PWD/api/api.yaml or $PWD/api/api.json?
TL;DR
Run the dredd image as a command-line tool. Dredd image at Docker Hub:
docker run -it -v $PWD:/api -w /api apiaryio/dredd init
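To address the Swagger part directly: Dredd accepts OpenAPI/Swagger files as the API description, so a one-off run could look like this (a hedged sketch; the endpoint URL is an assumption, and --network host only works on Linux, which is why the script below resolves the host IP via --add-host instead):

docker run -it --network host -v "$PWD":/api -w /api apiaryio/dredd dredd api/api.yaml http://localhost:8080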
[Optional] Turn it into a script:
#!/bin/bash
echo '***'
echo 'Root dir is /api'
export MYIP=`ifconfig | sed -En 's/127.0.0.1//;s/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*/\2/p'`
echo 'Host ip is: ' $MYIP
echo 'Configure URL of tested API endpoint: http://api-srv:<endpoint-port>. Set api-srv to point to your server.'
echo 'This script will set api-srv to docker host machine - ' $MYIP
echo '***'
docker run -it --add-host "api-srv:$MYIP" -v $PWD:/api -w /api apiaryio/dredd dredd $1
[Optional] And put this script in a folder that is in your PATH variable and create an alias to shorten it:
alias dredd='bash ./scripts/dredd.sh'
Code at a GitHub gist.
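With the alias in place, usage could look like this (a sketch; it assumes dredd init has written a dredd.yml into $PWD, which the containerized dredd then picks up because /api is its working directory):

docker run -it -v $PWD:/api -w /api apiaryio/dredd dredd init   # one-time: generate dredd.yml
dredd   # afterwards, run the tests through the alias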
I have a very simple config.yml:
version: 2
jobs:
  build:
    working_directory: ~/app
    docker:
      - image: circleci/node:8.4.0
    steps:
      - checkout
      - run: node -e "console.log('Hello from NodeJS ' + process.version + '\!')"
      - run: yarn
      - setup_remote_docker
      - run: docker build .
All it does: boot a node image, test if node is running, do a yarn install and a docker build.
My Dockerfile is nothing special; it has a COPY and an ENTRYPOINT.
When I run circleci build on my MacBook Air using Docker Native, I get the following error:
Got permission denied while trying to connect to the Docker daemon socket at unix://[...]
If I change the docker build . command to: sudo docker build ., everything works as planned, locally, with circleci build.
However, pushing this change to CircleCI will result in an error: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
So, to summarize: using sudo works, locally, but not on CircleCI itself. Not using sudo works on CircleCI, but not locally.
Is this something the CircleCI staff has to fix, or is there something I can do?
For reference, I have posted this question on the CircleCI forums as well.
I've created a workaround for myself.
In the very first step of the config.yml, I run this command:
if [[ $CIRCLE_SHELL_ENV == *"localbuild"* ]]; then
  echo "This is a local build. Enabling sudo for docker"
  echo sudo > ~/sudo
else
  echo "This is not a local build. Disabling sudo for docker"
  touch ~/sudo
fi
Afterwards, you can do this:
eval `cat ~/sudo` docker build .
Explanation:
The first snippet checks if the CircleCI-provided environment variable CIRCLE_SHELL_ENV contains localbuild. This is only true when running circleci build on your local machine.
If true, it creates a file called sudo with contents sudo in the home directory.
If false, it creates a file called sudo with NO contents in the home directory.
The second snippet reads the ~/sudo file and prepends its contents to the arguments you give afterwards. If the ~/sudo file contains sudo, the command in this example becomes sudo docker build .; if it is empty, it becomes docker build . (with a leading space, which is ignored).
This way, both the local (circleci build) builds and remote builds will work.
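Put together as config.yml steps, this could look something like the following sketch (the step names are made up for illustration):

- run:
    name: Detect local build and prepare sudo shim
    command: |
      if [[ $CIRCLE_SHELL_ENV == *"localbuild"* ]]; then
        echo sudo > ~/sudo
      else
        touch ~/sudo
      fi
- run:
    name: Build image
    command: eval `cat ~/sudo` docker build .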
To iterate on Jeff Huijsmans's answer, an alternative version is to use a Bash variable for docker:
- run:
    name: Set up docker
    command: |
      if [[ $CIRCLE_SHELL_ENV == *"localbuild"* ]]; then
        echo "export docker='sudo docker'" >> $BASH_ENV
      else
        echo "export docker='docker'" >> $BASH_ENV
      fi
Then you can use it in your config:

- run:
    name: Verify docker
    command: $docker --version
You can see this in action in my test for my Dotfiles repository.
Documentation about environment variables in CircleCI.
You might also solve your issue by running the docker image as root. Specify user: root under the image parameter:
...
jobs:
  build:
    working_directory: ~/app
    docker:
      - image: circleci/node:8.4.0
        user: root
    steps:
      - checkout
    ...
...