I'm trying to build a docker image using Ansible's community.docker collection, where the task runs on an amd64 host but the docker image is built for arm64 targets.
I have created the following task:
- name: Tag and Push Docker Image {{ project.group }}/{{ project.name }} Version {{ version }} Build {{ build }}
  docker_image:
    name: '{{ project.group }}/{{ project.name }}'
    repository: '{{ registryIpPort }}/{{ project.group }}/{{ project.name }}:{{ version }}-BUILD-{{ build }}'
    tag: '{{ version }}'
    force_tag: yes
    push: yes
    source: build
    build:
      path: /home/ansible/git/{{ project.group }}/{{ project.name }}
      pull: no
      platform: arm64
However, when I run the task I get this error:
"Unsupported parameters for (docker_image) module: platform found in build. Supported parameters include: args, cache_from, container_limits, dockerfile, etc_hosts, http_timeout, network, nocache, path, pull, rm, target, use_config_proxy"
Looking at the documentation for community.docker.docker_image, it says that platform was added in version 1.1.0 of community.docker.
My installation is made up as follows:
ansible-galaxy version 2.9.6
community.docker collection version 1.7.0
--- UPDATE ----
I think the error is in the second line:
docker_image:
which should be:
community.docker.docker_image:
Prepending community.docker. on that line makes the error go away. The reason: under Ansible 2.9, a bare module name like docker_image resolves to the docker_image module bundled with Ansible itself, which predates the platform option, so the fully qualified collection name is needed to pick up the newer module from the installed community.docker collection.
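For reference, here is the working task; only the module line differs from the original:

- name: Tag and Push Docker Image {{ project.group }}/{{ project.name }} Version {{ version }} Build {{ build }}
  community.docker.docker_image:
    name: '{{ project.group }}/{{ project.name }}'
    repository: '{{ registryIpPort }}/{{ project.group }}/{{ project.name }}:{{ version }}-BUILD-{{ build }}'
    tag: '{{ version }}'
    force_tag: yes
    push: yes
    source: build
    build:
      path: /home/ansible/git/{{ project.group }}/{{ project.name }}
      pull: no
      platform: arm64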
I'm trying to set up a GitHub workflow for building an image and pushing it to the registry, using the redhat-actions actions:
workflow.yaml
name: build-maven-runner
on:
  workflow_dispatch:
jobs:
  build-test-push:
    outputs:
      image-url: ${{ steps.push-to-artifactory.outputs.registry-path }}
      image-digest: ${{ steps.push-to-artifactory.outputs.digest }}
    name: build-job
    env:
      runner_memorylimit: 2Gi
      runner_cpulimit: 2
    runs-on: [ linux ]
    steps:
      - name: Clone
        uses: actions/checkout@v2
      - name: Pre-Login
        # podman-login: requires docker config repo auths
        # Error: TypeError: Cannot set property 'some.repo.com' of undefined
        run: |
          mkdir /home/runner/.docker/
          cat <<EOT >> /home/runner/.docker/config.json
          {
            "auths": {
              "some.repo.com": {}
            }
          }
          EOT
      - name: Login
        uses: redhat-actions/podman-login@v1
        with:
          registry: some.repo.com
          username: ${{ secrets.USERNAME }}
          password: ${{ secrets.PASSWORD }}
          auth_file_path: /tmp/podman-run-1000/containers/auth.json
      - name: Build
        id: build-image
        uses: redhat-actions/buildah-build@v2
        with:
          image: some-image
          tags: latest
          containerfiles: ./config/Dockerfile
          tls-verify: false
      - name: Push
        id: push-to-artifactory
        uses: redhat-actions/push-to-registry@v2
        with:
          image: ${{ steps.build-image.outputs.image }}
          tags: latest
          registry: some.other.repo.com/project
          username: ${{ secrets.USERNAME }}
          password: ${{ secrets.PASSWORD }}
          tls-verify: false
./config/Dockerfile
FROM .../openshift/origin-cli:4.10
USER root
RUN sudo yum update -y
RUN sudo yum install -y maven
RUN mvn -version
RUN oc version
But the Build step is failing, resulting in:
/usr/bin/buildah version
Version: 1.22.3
Go Version: go1.15.2
Image Spec: 1.0.1-dev
Runtime Spec: 1.0.2-dev
CNI Spec: 0.4.0
libcni Version:
image Version: 5.15.2
Git Commit:
Built: Thu Jan 1 00:00:00 1970
OS/Arch: linux/amd64
Overriding storage mount_program with "fuse-overlayfs" in environment
Performing build from Containerfile
/usr/bin/buildah bud -f /runner/_work/some-project/some-project/config/Dockerfile --format docker --tls-verify=false -t some-image:latest /runner/_work/some-project/some-project
chown /home/runner/.local/share/containers/storage/overlay/l: operation not permitted
time="2022-12-12T16:13:52Z" level=warning msg="failed to shutdown storage: \"chown /home/runner/.local/share/containers/storage/overlay/l: operation not permitted\""
time="2022-12-12T16:13:52Z" level=error msg="exit status 125"
Error: Error: buildah exited with code 125
I'm fairly out of ideas at this point. I was wondering whether it has to do with storage.conf, as mentioned here, but even after overriding storage.conf I still get the same error. Originally, storage.conf looks like this:
[storage]
driver = "overlay"
runroot = "/run/containers/storage"
graphroot = "/var/lib/containers/storage"
[storage.options]
additionalimagestores = [
]
[storage.options.overlay]
mountopt = "nodev,metacopy=on"
[storage.options.thinpool]
Does the problem lie deeper, like in the Dockerfile base image openshift/origin-cli?
Any help would be appreciated.
I ran into this issue today because I was doing some tests locally; typically your CI/CD should give the correct permissions to your containers (or to the workers running your jobs). I fixed this issue by adding the --privileged flag while running my container. I do not recommend using that mode in production unless you are really sure of what you are doing. Perhaps not exactly your issue, but I'm dropping it here in case it helps someone else.
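For example, when launching the worker container locally (a sketch; the image name is just a placeholder):

docker run --privileged my-runner-image:latest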
I am building containers for both amd64 and arm64. Unfortunately, using qemu slows down the build for arm64 so much that it takes longer than 6 hours and I can't use the GitHub Actions free tier. I have an arm64 self-hosted runner, so I set up my docker build with:
jobs:
  docker:
    strategy:
      matrix:
        os: [ubuntu-latest, [self-hosted, LINUX, ARM64]]
    runs-on: ${{ matrix.os }}
    timeout-minutes: 480
This spawns two jobs, one for amd64 and one for arm64. Now, during the docker/build-push-action@v3 step, the arm64 container (which takes much longer to build) overrides the amd64 container. I would like to avoid having to put the OS/Arch into the container tag. What do I need to change in the docker/build-push-action@v3 config below?
- name: foo
  uses: docker/build-push-action@v3
  with:
    push: true
    file: install_foo/Dockerfile_install
    build-args: |
      GHTOKEN=${{ secrets.PERSONAL_ACCESS_TOKEN }}
      TARGET_UBUNTU=ubuntu22.04
      VERSION=${{ env.FOO_VERSION }}
    tags: >
      BAR/FOO:FOO-devel-${{ env.FOO_VERSION }}-ubuntu22.04,
      BAR/FOO:FOO-devel-current-ubuntu22.04,
      BAR/FOO:FOO-devel-${{ env.FOO_VERSION }}-ubuntu22.04-${{ env.current_date }}
Thanks!
I'm trying to pull a docker image from ECR and deploy it on an EC2 instance. However, it's throwing an error:
docker pull $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
======END======
err: invalid reference format
2022/11/03 15:31:54 Process exited with status 1
My yml file is:
name: Docker Image CI
on:
  push:
    branches: [ "main" ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.TF_USER_AWS_KEY }}
          aws-secret-access-key: ${{ secrets.TF_USER_AWS_SECRET }}
          aws-region: us-east-1
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build, tag, and push image to Amazon ECR
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: githubactions
          IMAGE_TAG: githubactions_image
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
      - name: Docker pull & run from github
        uses: appleboy/ssh-action@master
        with:
          host: ec2-3-86-102-151.compute-1.amazonaws.com
          username: ec2-user
          key: ${{ secrets.ACTIONS_PRIVATE_KEY }}
          envs: GITHUB_SHA
          script: |
            docker pull $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
I've spent a lot of time on this and I can't really understand what's wrong. Any idea is really appreciated.
Your issue is with the env vars.
You are confusing the GitHub runner with the EC2 server.
The env vars you mention in the GitHub yml do not exist on the remote machine (EC2).
The command docker pull $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG fails because the remote machine (EC2) does not know about your GitHub env vars.
From the docs here, you can see that there is an easy built-in way to pass them to the EC2 instance:
  - name: pass environment
    uses: appleboy/ssh-action@master
+   env:
+     FOO: "BAR"
+     BAR: "FOO"
+     SHA: ${{ github.sha }}
    with:
      host: ${{ secrets.HOST }}
      username: ${{ secrets.USERNAME }}
      key: ${{ secrets.KEY }}
      port: ${{ secrets.PORT }}
+     envs: FOO,BAR,SHA
      script: |
        echo "I am $FOO"
        echo "I am $BAR"
        echo "sha: $SHA"
Hope it helps, and good luck.
I would first add an ls and echo:
run: |
  pwd
  ls -arth
  echo "docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG ."
  docker ...
That way, I would check that I am in the right folder (with a Dockerfile in it), and that all variables indeed have values.
If you see <aregistry>/<arepo>: (meaning no tag), the final ':' might be enough to trigger the error message.
That, or:
- the name uses the wrong hyphen '-', as in this issue;
- the image name has an invalid character like a \;
- the image name uses the wrong syntax.
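For instance, an empty tag alone is enough to reproduce the message (registry and repo names here are placeholders):

$ docker pull some.repo.example/some-repo:
invalid reference format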
I'm trying to get Ansible to recreate an existing docker container in case one of the mounted files has changed. I tried to use docker_container to remove the container, if it exists and any file has changed, before I deploy it using docker_stack and a compose file. Here is the code:
- name: Template configuration files to destination
  template:
    ...
  loop:
    ...
  register: template_files_result

- name: Get docker container name
  shell: "docker ps -f 'name=some_name' -q"
  register: container_id

- name: Remove container
  docker_container:
    name: container_id.stdout
    force_kill: yes
    state: absent
  vars:
    files_changed: "{{ template_files_result | json_query('results[*].changed') }}"
  when: container_id.stdout and files_changed is any

- name: Deploy
  docker_stack:
    state: present
    name: stack_name
    compose:
      - "compose.yml"
    with_registry_auth: true
However, the Remove container task never does anything and I can't figure out why.
What am I missing?
I am trying to write a custom GitHub action that runs some commands in a docker container, but allows the user to select which docker container they are run in (i.e. so I can run the same build instructions across different versions of the runtime environment).
My gut instinct was to have my .github/actions/main/action.yml file as
name: 'Docker container command execution'
inputs:
  dockerfile:
    default: Dockerfile_r_latest
runs:
  using: 'docker'
  image: '${{ inputs.dockerfile }}'
  args:
    - /scripts/commands.sh
However this errors with:
##[error](Line: 7, Col: 10): Unrecognized named-value: 'inputs'. Located at position 1 within expression: inputs.dockerfile
Any help would be appreciated!
File References
My .github/workflow/build_and_test.yml file is:
name: Test Package
on:
  [push, pull_request]
jobs:
  R_latest:
    name: Test on latest
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
        name: Checkout project
      - uses: ./.github/actions/main
        name: Build and test
        with:
          dockerfile: Dockerfile_r_latest
And my Dockerfile .github/actions/main/Dockerfile_r_latest is:
FROM rocker/verse:latest
ADD scripts /scripts
ENTRYPOINT [ "bash", "-c" ]
Interesting approach! I'm not sure if it's possible to use expressions in the image field of the action metadata. I would guess that the only fields that can take expressions instead of hardcoded strings are the args for the image so that the inputs can be passed.
For reference this is the args section of the action.yml metadata.
https://help.github.com/en/articles/metadata-syntax-for-github-actions#args
I think there are other ways to achieve what you want to do. Have you tried using the jobs.<job_id>.container syntax? That allows you to specify an image that the steps of a job will run in. It will require that you publish the image to a public repository, though. So take care not to include any secrets.
For example, if you published your image to Docker Hub at gowerc/r-latest your workflow might look something like this:
name: Test Package
on:
  [push, pull_request]
jobs:
  R_latest:
    name: Test on latest
    runs-on: ubuntu-latest
    container: gowerc/r-latest
    steps:
      - uses: actions/checkout@master
        name: Checkout project
      - name: Build and test
        run: ./scripts/commands.sh
ref: https://help.github.com/en/articles/workflow-syntax-for-github-actions#jobsjob_idcontainer
Alternatively, you can also specify your image at the step level with uses. You could then pass a command via args to execute your script.
name: my workflow
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - name: Check container
        uses: docker://alpine:3.8
        with:
          args: /bin/sh -c "cat /etc/alpine-release"
ref: https://help.github.com/en/github/automating-your-workflow-with-github-actions/workflow-syntax-for-github-actions#example-using-a-docker-hub-action
In addition to @peterevans' answer, I would add that there's a 3rd option, where you can use a simple docker run command and pass any env vars that you have defined.
That helped to solve 3 things:
- Reuse a custom docker image built within the steps for testing actions. It doesn't seem possible to do so with uses, as it first tries to pull that image (which doesn't exist yet) in a Setup job step that occurs before any steps of the job.
- This specific image can also be stored in a private docker registry.
- You can use a variable for the docker image.
My workflow looks like this:
name: Build-Test-Push
on:
  push:
    branches:
      - master
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  ECR_REGISTRY: ${{ secrets.AWS_ECR_REGISTRY }}
  ECR_REPOSITORY: myproject/myimage
  IMAGE_TAG: ${{ github.sha }}
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - name: Checking out
        uses: actions/checkout@v2
        with:
          ref: master
      - name: Login to AWS ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build
        run: |
          docker pull $ECR_REGISTRY/$ECR_REPOSITORY || true
          docker build . -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG -t $ECR_REGISTRY/$ECR_REPOSITORY:latest
      - name: Test
        run: |
          docker run $ECR_REGISTRY/$ECR_REPOSITORY:latest /bin/bash -c "make test"
      - name: Push
        run: |
          docker push $ECR_REGISTRY/$ECR_REPOSITORY
Here is another approach. The Docker image to use is passed to a cibuild shell script that takes care of pulling the right image.
GitHub workflow file:
name: 'GH Actions CI'
on:
  push:
    branches: ['*master', '*0.[0-9]?.x']
  pull_request:
    # The branches below must be a subset of the branches above
    branches: ['*master', '*0.[0-9]?.x']
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    strategy:
      fail-fast: true
      matrix:
        include:
          - FROM: 'ubuntu:focal'
          - FROM: 'ubuntu:bionic'
          - FROM: 'ubuntu:xenial'
          - FROM: 'debian:buster'
          - FROM: 'debian:stretch'
          - FROM: 'opensuse/leap'
          - FROM: 'fedora:33'
          - FROM: 'fedora:32'
          - FROM: 'centos:8'
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2
        with:
          # We must fetch at least the immediate parents so that if this is
          # a pull request then we can checkout the head.
          fetch-depth: 2
      # If this run was triggered by a pull request event, then checkout
      # the head of the pull request instead of the merge commit.
      - run: git checkout HEAD^2
        if: ${{ github.event_name == 'pull_request' }}
      - name: Run CI
        env:
          FROM: ${{ matrix.FROM }}
        run: script/cibuild
Bash script script/cibuild:
#!/bin/bash
set -e
docker run --name my-docker-container $FROM script/custom-script.sh
docker cp my-docker-container:/usr/src/my-workdir/my-outputs .
docker rm my-docker-container
echo "cibuild Done!"
Put your custom commands in script/custom-script.sh.
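Note that for the relative path script/custom-script.sh to resolve inside the container, the checked-out repository has to be visible in there. One way to do that (a sketch, assuming the /usr/src/my-workdir layout implied by the docker cp line above is the intended mount point) is to bind-mount the checkout in the docker run line:

# Bind-mount the checkout at the path the `docker cp` in script/cibuild expects,
# and make it the working directory so the relative script path resolves.
docker run --name my-docker-container \
  -v "$PWD":/usr/src/my-workdir -w /usr/src/my-workdir \
  "$FROM" script/custom-script.sh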