My cloudbuild.yaml is failing. Please review my cloudbuild.yaml - docker

I am trying to deploy a React app to a Kubernetes cluster. All my Kubernetes files reside in the k8s/ folder, where I have a deployment.yaml and a service.yaml file.
Below is my cloudbuild.yaml file, which resides in the root folder. The gcr.io/cloud-builders/kubectl part (Stage 3) is failing, and I get the error below:
build step 2 "gcr.io/cloud-builders/kubectl" failed: step exited with non-zero status: 1
steps:
# Build the image - Stage 1
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/${_PROJECT}/${_CONTAINERNAME}:${_VERSION}', '.']
  timeout: 1500s
# Push the image - Stage 2
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/${_PROJECT}/${_CONTAINERNAME}:${_VERSION}']
# Deploy changes to kubernetes config files - Stage 3
- name: "gcr.io/cloud-builders/kubectl"
  args: ["apply", "-f", "k8s/"]
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=${_ZONE}'
  - 'CLOUDSDK_CONTAINER_CLUSTER=${_GKE_CLUSTER}'

# These are variable substitutions
substitutions:
  # GCP-specific configuration. Please DON'T change anything
  _PROJECT: my-projects-121212
  _ZONE: us-central1-c
  _GKE_CLUSTER: cluster-1
  # Repository-specific configuration. DevOps can change these settings
  _DEPLOYMENTNAME: react
  _CONTAINERNAME: react
  _REPO_NAME: react-app
  # Developers ONLY change
  _VERSION: v1.0

options:
  substitution_option: 'ALLOW_LOOSE'
  machineType: 'N1_HIGHCPU_8'
timeout: 2500s

In step 3, the name is wrapped in double quotes: name: "gcr.io/cloud-builders/kubectl".
If you replace them with single quotes, the issue should be fixed.
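For reference, this is simply a restatement of the suggestion above - Stage 3 with single quotes and nothing else changed:

# Deploy changes to kubernetes config files - Stage 3
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['apply', '-f', 'k8s/']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=${_ZONE}'
  - 'CLOUDSDK_CONTAINER_CLUSTER=${_GKE_CLUSTER}'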

Related

Is Azure Pipelines supposed to generate my Kubernetes Manifest files? If so, why isn't mine?

I've been struggling with this for weeks, so I'm finally reaching out.
From what I understand, Azure DevOps pipelines are able to generate a start-to-finish YAML file that builds and pushes Docker images to Azure Container Registry, then employs Kubernetes to generate manifest files as artifacts, and subsequently uses those generated manifest files to deploy our multi-container application to Azure Kubernetes Service. Is that a bad understanding? Do I need to have my manifest files written myself before using the pipeline? If so, is there a better way to generate the manifest files? Currently I've tried doing it by hand, line by line, but I'm running into issues.
I've attached the auto-generated YAML file to this post (I've gone through and hidden personal/private details from the code). I've been able to get it through the first stage without issue - composing/pushing Docker images to ACR - but the deploy stage fails every time, I'm guessing because my manifest files are incorrectly written.
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml

trigger:
- master

resources:
- repo: self

variables:
  # Container registry service connection established during pipeline creation
  dockerRegistryServiceConnection: 'HIDDEN'
  imageRepository: 'dec7'
  containerRegistry: 'HIDDEN'
  dockerfilePath: '**/Dockerfile'
  buildContext: 1.x/trunk/src/
  tag: '$(Build.BuildId)'
  imagePullSecret: 'HIDDEN'
  # Agent VM image name
  vmImageName: 'ubuntu-20.04'
  # Name of the new namespace being created to deploy the PR changes.
  k8sNamespaceForPR: 'review-app-$(System.PullRequest.PullRequestId)'

stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: DockerCompose@0
      displayName: 'Build services'
      inputs:
        containerregistrytype: 'Azure Container Registry'
        azureSubscription: HIDDEN
        azureContainerRegistry: 'HIDDEN'
        dockerComposeFile: '1.x/trunk/src/docker-compose.yml'
        dockerComposeFileArgs: 'DOCKER_BUILD_SOURCE='
        action: 'Build services'
        additionalImageTags: '$(Build.BuildId)'
    - task: DockerCompose@0
      displayName: 'Push services'
      inputs:
        containerregistrytype: 'Azure Container Registry'
        azureSubscription: HIDDEN
        azureContainerRegistry: 'HIDDEN'
        dockerComposeFile: '1.x/trunk/src/docker-compose.yml'
        dockerComposeFileArgs: 'DOCKER_BUILD_SOURCE='
        action: 'Push services'
        additionalImageTags: '$(Build.BuildId)'
    - task: DockerCompose@0
      displayName: 'Lock services'
      inputs:
        containerregistrytype: 'Azure Container Registry'
        azureSubscription: HIDDEN
        azureContainerRegistry: 'HIDDEN'
        dockerComposeFile: '1.x/trunk/src/docker-compose.yml'
        dockerComposeFileArgs: 'DOCKER_BUILD_SOURCE='
        action: 'Lock services'
        outputDockerComposeFile: '$(Build.StagingDirectory)/docker-compose.yml'
    - upload: manifests
      artifact: manifests

- stage: Deploy
  displayName: Deploy stage
  dependsOn: Build
  jobs:
  - deployment: Deploy
    condition: and(succeeded(), not(startsWith(variables['Build.SourceBranch'], 'refs/pull/')))
    displayName: Deploy
    pool:
      vmImage: $(vmImageName)
    environment: HIDDEN
    strategy:
      runOnce:
        deploy:
          steps:
          - checkout: self
          - task: KubernetesManifest@0
            displayName: Create imagePullSecret
            inputs:
              action: 'createSecret'
              kubernetesServiceConnection: 'AKSServiceConnectionDec6'
              secretType: 'dockerRegistry'
              secretName: '$(imagePullSecret)'
              dockerRegistryEndpoint: '$(dockerRegistryServiceConnection)'
          - task: KubernetesManifest@0
            displayName: Deploy to Kubernetes cluster
            inputs:
              action: 'deploy'
              kubernetesServiceConnection: 'AKSServiceConnectionDec6'
              manifests: |
                $(Pipeline.Workspace)/manifests/deployment.yml
                $(Pipeline.Workspace)/manifests/service.yml
              containers: '$(containerRegistry)/$(imageRepository):$(tag)'
              imagePullSecrets: '$(imagePullSecret)'

  - deployment: DeployPullRequest
    displayName: Deploy Pull request
    condition: and(succeeded(), startsWith(variables['Build.SourceBranch'], 'refs/pull/'))
    pool:
      vmImage: $(vmImageName)
    environment: 'HIDDEN$(k8sNamespaceForPR)'
    strategy:
      runOnce:
        deploy:
          steps:
          - reviewApp: HIDDEN
          - task: Kubernetes@1
            displayName: 'Create a new namespace for the pull request'
            inputs:
              connectionType: 'Kubernetes Service Connection'
              kubernetesServiceEndpoint: 'AKSServiceConnectionDec6'
              command: 'apply'
              useConfigurationFile: true
              secretType: 'dockerRegistry'
              containerRegistryType: 'Azure Container Registry'
          - task: KubernetesManifest@0
            displayName: Create imagePullSecret
            inputs:
              action: createSecret
              secretName: $(imagePullSecret)
              namespace: $(k8sNamespaceForPR)
              dockerRegistryEndpoint: $(dockerRegistryServiceConnection)
          - task: KubernetesManifest@0
            displayName: Deploy to the new namespace in the Kubernetes cluster
            inputs:
              action: 'deploy'
              kubernetesServiceConnection: 'AKSServiceConnectionDec6'
              namespace: '$(k8sNamespaceForPR)'
              manifests: |
                $(Pipeline.Workspace)/manifests/deployment.yml
                $(Pipeline.Workspace)/manifests/service.yml
              containers: '$(containerRegistry)/$(imageRepository):$(tag)'
              imagePullSecrets: '$(imagePullSecret)'
          - task: Kubernetes@1
            name: get
            displayName: 'Get services in the new namespace'
            continueOnError: true
            inputs:
              connectionType: 'Kubernetes Service Connection'
              kubernetesServiceEndpoint: 'AKSServiceConnectionDec6'
              namespace: '$(k8sNamespaceForPR)'
              command: 'get'
              arguments: 'svc'
              secretType: 'dockerRegistry'
              containerRegistryType: 'Azure Container Registry'
              outputFormat: 'jsonpath=''http://{.items[0].status.loadBalancer.ingress[0].ip}:{.items[0].spec.ports[0].port}'''
          # Getting the IP of the deployed service and writing it to a variable for posting a comment
          - script: |
              url="$(get.KubectlOutput)"
              message="Your review app has been deployed"
              if [ ! -z "$url" -a "$url" != "http://:" ]
              then
                message="${message} and is available at $url.<br><br>[Learn More](https://aka.ms/testwithreviewapps) about how to test and provide feedback for the app."
              fi
              echo "##vso[task.setvariable variable=GITHUB_COMMENT]$message"
I've tried generating new pipelines from scratch using both the classic editor and the new editor Microsoft provides. I get an issue with the build stage not being able to find the working directory, which I fix by specifying it manually. However, once the pipeline gets to the deploy stage I get the following error:
##[error]No manifest file(s) matching /home/vsts/work/1/manifests/deployment.yml,/home/vsts/work/1/manifests/service.yml was found.
This tells me that the pipeline isn't generating manifest files like I thought it was supposed to. So I wrote one myself, probably incorrectly, and it ran once - but timed out. Now I get the following error after running the deploy stage with an altered manifest file:
error: deployment "v4deployment" exceeded its progress deadline
##[error]Error: error: deployment "v4deployment" exceeded its progress deadline
Is that a bad understanding?
Yes. You still have to author your deployment manifests. The pipeline can apply the manifests to the cluster, but it's not going to generate anything for you.
Assumption: your cluster is AKS.
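Since the manifests have to be written by hand, here is a minimal sketch of what they could look like (all names, the image, and the port are placeholders, not taken from the original pipeline):

# manifests/deployment.yml (placeholder names)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myregistry.azurecr.io/dec7   # the KubernetesManifest task's containers input should update this tag at deploy time
        ports:
        - containerPort: 80
---
# manifests/service.yml (placeholder names)
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 80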
Is that a bad understanding?
No, your understanding is correct. When you create a new pipeline and select the option Deploy to Azure Kubernetes Service, it will ask for an Azure subscription, and after selecting all the options it will generate the pipeline YAML along with the Kubernetes manifest files inside a manifests folder under the root of your repository. You can modify/update these manifest files as per your needs. I have removed some personal details from the snapshot.
##[error]No manifest file(s) matching /home/vsts/work/1/manifests/deployment.yml,/home/vsts/work/1/manifests/service.yml was found.
You have to check the path of the manifest files generated inside your repo and provide the correct path inside your pipeline YAML file. For example, in our case the manifest files were hosted under root --> manifests --> framework --> develop, so my YAML looked like this:
- task: KubernetesManifest@0
  displayName: Deploy to Kubernetes cluster
  inputs:
    action: deploy
    manifests: |
      $(Pipeline.Workspace)/manifests/framework/develop/deployment.yml
      $(Pipeline.Workspace)/manifests/framework/develop/service.yml
    imagePullSecrets: |
      $(imagePullSecret)
    containers: |
      $(containerRegistry)/$(imageRepository):$(tag)
error: deployment "v4deployment" exceeded its progress deadline
##[error]Error: error: deployment "v4deployment" exceeded its progress deadline
This error indicates that your manifest was applied; the deployment is now waiting for all application pods to reach the Running state, but somehow the pods are not getting ready due to an error. To check the error you can access your cluster (kubeconfig or dashboard) and look at the namespace events, or you can check the pod events/logs directly. These two commands will give you enough evidence of why your pods are not healthy:
kubectl describe pod your-pod-name -n your-namespace
kubectl logs -f your-pod-name -n your-namespace
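To look at the namespace events mentioned above, one way (assuming a reasonably recent kubectl) is:
kubectl get events -n your-namespace --sort-by=.lastTimestamp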

mirror: Access failed: /opt/atlassian/pipelines/agent/build/dist/*: No such file or directory

I am new to using Bitbucket Pipelines. I have an issue with deploying my dist files to an FTP server: the error "mirror: Access failed: /opt/atlassian/pipelines/agent/build/dist/*: No such file or directory" occurs when I try to deploy the project.
This is my bitbucket-pipelines.yml file:
# Template NodeJS build
# This template allows you to validate your NodeJS code.
# The workflow allows running tests and code linting on the default branch.

image: node:16

pipelines:
  branches:
    master:
      - step:
          name: Install dependencies
          caches:
            - node
          script:
            - npm install
          artifacts:
            - node_modules/** # Save modules for next steps
      - step:
          name: Build project
          caches:
            - node
          script:
            - npm run build
          artifacts:
            - dist/** # Save build for next steps
      - step:
          name: Deploy to Production
          trigger: manual
          deployment: Production
          script:
            - pipe: atlassian/ftp-deploy:0.3.7
              variables:
                USER: $FTP_USERNAME
                PASSWORD: $FTP_PASSWORD
                SERVER: $FTP_HOST
                REMOTE_PATH: '/var/www/*******/booking.crt-minds.ru/'
                LOCAL_PATH: 'dist/*'
                EXTRA_ARGS: "--exclude=.bitbucket/ --exclude=.git/ --exclude=bitbucket-pipelines.yml --exclude=.gitignore" # Ignore these
I have tried deleting LOCAL_PATH in the YAML to see what happens. But first of all, I do not understand whether my pipeline even has access to the FTP server - how can I check that? And then, how do I replace the dist folder files on the FTP server? Maybe my bitbucket-pipelines.yml file is configured incorrectly?
Judging from the pipe's documentation:
LOCAL_PATH: Optional path to local directory to upload. Default ${BITBUCKET_CLONE_DIR}.
I bet it is interpreting the value you passed not as a glob pattern but literally as a folder named dist/*.
Try dropping that /*:
- step:
    script:
      - pipe: atlassian/ftp-deploy:0.3.7
        variables:
          USER: $FTP_USERNAME
          PASSWORD: $FTP_PASSWORD
          SERVER: $FTP_HOST
          REMOTE_PATH: /var/www/site
          LOCAL_PATH: dist

Kubernetes - How to set up parallel clusters (Prod, Dev) sharing the same repositories/pipeline

I am currently facing a situation where I need to deploy a small cluster (only 4 pods for now) which will contain 4 different microservices. This cluster has to be duplicated so that I can have one PRODUCTION cluster and one DEVELOPMENT cluster.
Even if that part isn't hard from my point of view (creating a cluster and then deploying Docker images to pods with parameters so that they use the right resource connection strings), I am stuck on the CI/CD part.
From a Cloud Build trigger, how do I push the Docker image to the right cluster's pods? I have absolutely no idea AT ALL how to achieve it.
Here is my cloudbuild.yaml:
steps:
# step 1 - Getting the previous (current) image
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: [
    '-c',
    'docker pull gcr.io/{PROJECT_ID}/{SERVICE_NAME}:latest || exit 0'
  ]
# step 2 - Build the image and push it to gcr.io
- name: 'gcr.io/cloud-builders/docker'
  args: [
    'build',
    '-t',
    'gcr.io/{PROJECT_ID}/{SERVICE_NAME}',
    '-t',
    'gcr.io/{PROJECT_ID}/{SERVICE_NAME}:latest',
    '.'
  ]
# step 3 - Deploy our container to our cluster
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['apply', '-f', 'service.yaml', '--force']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE={CLUSTER_ZONE}'
  - 'CLOUDSDK_CONTAINER_CLUSTER={CLUSTER_NAME}'
# step 4 - Set the image
- name: 'gcr.io/cloud-builders/kubectl'
  args: [
    'set',
    'image',
    'deployment',
    '{SERVICE_NAME}',
    '{SERVICE_NAME}=gcr.io/{PROJECT_ID}/{SERVICE_NAME}'
  ]
  env:
  - 'CLOUDSDK_COMPUTE_ZONE={CLUSTER_ZONE}'
  - 'CLOUDSDK_CONTAINER_CLUSTER={CLUSTER_NAME}'

# push images to Google Container Registry with tags
images: [
  'gcr.io/{PROJECT_ID}/{SERVICE_NAME}',
  'gcr.io/{PROJECT_ID}/{SERVICE_NAME}:latest'
]
Can anyone help me out? I don't really know which direction to go in.
Do you know about Helm charts? They are designed for deploying to different environments.
With different values.yaml files, you can quickly deploy to different environments from the same source code base.
For example, you can name the values.yaml files after the environment:
values-dev.yaml
values-sit.yaml
values-prod.yaml
The only differences are some variables, such as the environment (dev/sit/prod) and the namespaces.
So when you run the deployment, it will be:
env=${ENVIRONMENT}
helm install -f values-${env}.yaml myredis ./redis
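As a rough sketch of the idea (these keys are illustrative, not taken from the question; your chart's values will differ), a values-dev.yaml might only override the environment-specific values:

# values-dev.yaml (illustrative keys)
environment: dev
namespace: myapp-dev
image:
  repository: gcr.io/my-project/my-service
  tag: dev-latest
replicaCount: 1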
So my question is:
How are you triggering these builds? Manually? GitHub Trigger? HTTP Trigger using the REST API?
You're almost there for the building/pushing part; you would need to use substitution variables: https://cloud.google.com/cloud-build/docs/configuring-builds/substitute-variable-values
If you are triggering the builds manually, you would edit the build trigger and change the substitution variable to whatever you want it to be.
GitHub Trigger -- this is a little more complex, as you might want to do releases or branches.
HTTP Trigger -- same as manual; in your request you change the substitution variable.
So here's part of one of our repo build files. As you will see, there are different substitution variables we use; sometimes we want to build the image AND deploy to the cluster, other times we just want to build or deploy.
steps:
# pull docker image
- name: 'gcr.io/cloud-builders/docker'
  id: pull-docker-image
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    docker pull $${_TAG_DOCKER_IMAGE} || exit 0
# build docker image
- name: 'gcr.io/cloud-builders/docker'
  id: build-docker-image
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    if [[ "${_BUILD_IMAGE}" == "true" ]]; then
      docker build -t ${_DOCKER_IMAGE_TAG} --cache-from $${_TAG_DOCKER_IMAGE} .;
    else
      echo "skipping ... BUILD_IMAGE=${_BUILD_IMAGE}";
    fi
# push docker image
- name: 'gcr.io/cloud-builders/docker'
  id: push-docker-image
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    if [[ "${_BUILD_IMAGE}" == "true" ]]; then
      docker push ${_DOCKER_IMAGE_TAG};
    else
      echo "skipping ... BUILD_IMAGE=${_BUILD_IMAGE}";
    fi
# tag docker image
- name: 'gcr.io/cloud-builders/gcloud'
  id: tag-docker-image
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    if [[ "${_BUILD_IMAGE}" == "true" ]]; then
      gcloud container images add-tag ${_DOCKER_IMAGE_TAG} $${_TAG_DOCKER_IMAGE} -q;
    else
      echo "skipping ... BUILD_IMAGE=${_BUILD_IMAGE}";
    fi
# update service image on environment
- name: 'gcr.io/cloud-builders/kubectl'
  id: update service deployment image
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    if [[ "${_UPDATE_CLUSTER}" == "true" ]]; then
      /builder/kubectl.bash set image deployment $REPO_NAME master=${_DOCKER_IMAGE_TAG} --namespace=${_DEFAULT_NAMESPACE};
    else
      echo "skipping ... UPDATE_CLUSTER=${_UPDATE_CLUSTER}";
    fi
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}'
  - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}'

# subs are needed because of our different ENVs
# _DOCKER_IMAGE_TAG = ['gcr.io/$PROJECT_ID/$REPO_NAME:gcb-${_COMPANY_ENV}-$SHORT_SHA', 'other']
# _COMPANY_ENV = ['dev', 'staging', 'prod']
# _DEFAULT_NAMESPACE = ['default'] or ['custom1', 'custom2']
# _CLOUDSDK_CONTAINER_CLUSTER = ['dev', 'prod']
# _CLOUDSDK_COMPUTE_ZONE = ['us-central1-a']
# _BUILD_IMAGE = ['true', 'false']
# _UPDATE_CLUSTER = ['true', 'false']
substitutions:
  _DOCKER_IMAGE_TAG: $DOCKER_IMAGE_TAG
  _COMPANY_ENV: dev
  _DEFAULT_NAMESPACE: default
  _CLOUDSDK_CONTAINER_CLUSTER: dev
  _CLOUDSDK_COMPUTE_ZONE: us-central1-a
  _BUILD_IMAGE: 'true'
  _UPDATE_CLUSTER: 'true'

options:
  substitution_option: 'ALLOW_LOOSE'
  env:
  - _TAG_DOCKER_IMAGE=gcr.io/$PROJECT_ID/$REPO_NAME:${_COMPANY_ENV}-latest
  - DOCKER_IMAGE_TAG=gcr.io/$PROJECT_ID/$REPO_NAME:gcb-${_COMPANY_ENV}-$SHORT_SHA

tags:
- '${_COMPANY_ENV}'
- 'build-${_BUILD_IMAGE}'
- 'update-${_UPDATE_CLUSTER}'
We have two workflows:
A GitHub trigger builds and deploys under the 'dev' environment.
We trigger via the REST API https://cloud.google.com/cloud-build/docs/api/reference/rest/v1/projects.builds/create (we replace the variables via the request.json) -- this method also works using the gcloud builds CLI with --substitutions.
Hope that answers your question!
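For example, a manual submission that overrides the substitutions (the values here are illustrative) could look like this:

gcloud builds submit --config=cloudbuild.yaml \
  --substitutions=_COMPANY_ENV=prod,_CLOUDSDK_CONTAINER_CLUSTER=prod,_BUILD_IMAGE=true,_UPDATE_CLUSTER=true .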
The short answer for this is to apply GitOps deployment practices in your workflow.
All your Kubernetes YAMLs or Helm charts live in a single Git repository, which is used by a GitOps operator.
In your CI pipeline, you only have to build and push Docker images.
The GitOps operator detects new image versions, makes the changes, and commits them to the target Kubernetes YAML in the repository.
Every change in the GitOps repository is applied to the cluster.
See https://fluxcd.io/
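As a rough sketch of what the operator side can look like with Flux (the names, paths, and intervals below are illustrative; check the Flux documentation for the current API versions):

# Tell Flux which Git repository holds the Kubernetes YAMLs
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: my-app-config
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/my-org/my-gitops-repo
  ref:
    branch: main
---
# Apply the manifests from a path in that repository to the cluster
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-app-dev
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: my-app-config
  path: ./k8s/dev
  prune: true

A separate Kustomization pointing at, say, ./k8s/prod on the production cluster would give you the dev/prod split the question asks about.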

How to use a variable docker image in github-actions?

I am trying to write a custom GitHub Action that runs some commands in a Docker container, but allows the user to select which Docker container they are run in (i.e. so I can run the same build instructions across different versions of the runtime environment).
My gut instinct was to have my .github/actions/main/action.yml file as:
name: 'Docker container command execution'
inputs:
  dockerfile:
    default: Dockerfile_r_latest
runs:
  using: 'docker'
  image: '${{ inputs.dockerfile }}'
  args:
  - /scripts/commands.sh
However this errors with:
##[error](Line: 7, Col: 10): Unrecognized named-value: 'inputs'. Located at position 1 within expression: inputs.dockerfile
Any help would be appreciated!
File References
My .github/workflow/build_and_test.yml file is:
name: Test Package

on:
  [push, pull_request]

jobs:
  R_latest:
    name: Test on latest
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@master
      name: Checkout project
    - uses: ./.github/actions/main
      name: Build and test
      with:
        dockerfile: Dockerfile_r_latest
And my Dockerfile .github/actions/main/Dockerfile_r_latest is:
FROM rocker/verse:latest
ADD scripts /scripts
ENTRYPOINT [ "bash", "-c" ]
Interesting approach! I'm not sure if it's possible to use expressions in the image field of the action metadata. I would guess that the only fields that can take expressions instead of hardcoded strings are the args for the image so that the inputs can be passed.
For reference this is the args section of the action.yml metadata.
https://help.github.com/en/articles/metadata-syntax-for-github-actions#args
I think there are other ways to achieve what you want to do. Have you tried using the jobs.<job_id>.container syntax? That allows you to specify an image that the steps of a job will run in. It will require that you publish the image to a public repository, though. So take care not to include any secrets.
For example, if you published your image to Docker Hub at gowerc/r-latest your workflow might look something like this:
name: Test Package

on:
  [push, pull_request]

jobs:
  R_latest:
    name: Test on latest
    runs-on: ubuntu-latest
    container: gowerc/r-latest
    steps:
    - uses: actions/checkout@master
      name: Checkout project
    - name: Build and test
      run: ./scripts/commands.sh
ref: https://help.github.com/en/articles/workflow-syntax-for-github-actions#jobsjob_idcontainer
Alternatively, you can also specify your image at the step level with uses. You could then pass a command via args to execute your script.
name: my workflow
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@master
    - name: Check container
      uses: docker://alpine:3.8
      with:
        args: /bin/sh -c "cat /etc/alpine-release"
ref: https://help.github.com/en/github/automating-your-workflow-with-github-actions/workflow-syntax-for-github-actions#example-using-a-docker-hub-action
In addition to @peterevans' answer, I would add that there's a third option where you can use a simple docker run command and pass any env variables that you have defined.
That helped solve 3 things:
Reuse a custom Docker image that is built within the steps, for testing actions. It doesn't seem possible to do this with uses, because it first tries to pull the image (which doesn't exist yet) in a Setup job step that occurs before any steps of the job.
The image can also be stored in a private Docker registry.
You can use a variable for the Docker image.
My workflow looks like this:
name: Build-Test-Push

on:
  push:
    branches:
      - master

env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  ECR_REGISTRY: ${{ secrets.AWS_ECR_REGISTRY }}
  ECR_REPOSITORY: myproject/myimage
  IMAGE_TAG: ${{ github.sha }}

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
    - name: Checking out
      uses: actions/checkout@v2
      with:
        ref: master
    - name: Login to AWS ECR
      id: login-ecr
      uses: aws-actions/amazon-ecr-login@v1
    - name: Build
      run: |
        docker pull $ECR_REGISTRY/$ECR_REPOSITORY || true
        docker build . -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG -t $ECR_REGISTRY/$ECR_REPOSITORY:latest
    - name: Test
      run: |
        docker run $ECR_REGISTRY/$ECR_REPOSITORY:latest /bin/bash -c "make test"
    - name: Push
      run: |
        docker push $ECR_REGISTRY/$ECR_REPOSITORY
Here is another approach. The Docker image to use is passed to a cibuild shell script that takes care of pulling the right image.
GitHub workflow file:
name: 'GH Actions CI'

on:
  push:
    branches: ['*master', '*0.[0-9]?.x']
  pull_request:
    # The branches below must be a subset of the branches above
    branches: ['*master', '*0.[0-9]?.x']

jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    strategy:
      fail-fast: true
      matrix:
        include:
          - FROM: 'ubuntu:focal'
          - FROM: 'ubuntu:bionic'
          - FROM: 'ubuntu:xenial'
          - FROM: 'debian:buster'
          - FROM: 'debian:stretch'
          - FROM: 'opensuse/leap'
          - FROM: 'fedora:33'
          - FROM: 'fedora:32'
          - FROM: 'centos:8'
    steps:
    - name: Checkout repository
      uses: actions/checkout@v2
      with:
        # We must fetch at least the immediate parents so that if this is
        # a pull request then we can checkout the head.
        fetch-depth: 2

    # If this run was triggered by a pull request event, then checkout
    # the head of the pull request instead of the merge commit.
    - run: git checkout HEAD^2
      if: ${{ github.event_name == 'pull_request' }}

    - name: Run CI
      env:
        FROM: ${{ matrix.FROM }}
      run: script/cibuild
Bash script script/cibuild:
#!/bin/bash
set -e
docker run --name my-docker-container $FROM script/custom-script.sh
docker cp my-docker-container:/usr/src/my-workdir/my-outputs .
docker rm my-docker-container
echo "cibuild Done!"
Put your custom commands in script/custom-script.sh.
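A hypothetical script/custom-script.sh that fits this pattern (not part of the original answer) could simply run the project's build and write its results to the directory the cibuild script copies out:

#!/bin/bash
# Hypothetical example - runs inside the container selected by $FROM
set -e

mkdir -p /usr/src/my-workdir/my-outputs
cd /usr/src/my-workdir

# Replace this with the project's real build and test commands
echo "built on: $(head -n 1 /etc/os-release)" > my-outputs/build-info.txt

echo "custom-script.sh Done!"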

Concourse pending for long time before running task

I have a Concourse pipeline with a task that uses a Docker image stored on our local Artifactory server. Every time I start the pipeline it takes about 5 minutes until the tasks finally run; during that time the build just shows "preparing build" with status "pending".
I assume that Concourse somehow checks for newer versions of the Docker image. Unfortunately I have no way to debug this, since the log files on the Concourse worker VM offer no usable information.
My Questions:
How can I debug what's going on when Concourse says "preparing build" and the status is "pending"?
Is there any way to stop Concourse from checking for a newer version of the Docker image? I tagged the Docker image with the version latest - might this be an issue?
Any further ideas on how I could speed things up?
Here is the detailed configuration of my pipeline and tasks:
pipeline.yml:
---
resources:
- name: concourse-image
  type: docker-image
  source:
    repository: OUR_DOMAIN/subpath/concourse
    username: ...
    password: ...
    insecure_registries:
    - OUR_DOMAIN
# ...
jobs:
- name: deploy
  public: true
  plan:
  - get: concourse-image
  - task: create-manifest
    image: concourse-image
    file: concourse/tasks/create-manifest/task.yml
    params:
      # ...
task.yml:
---
platform: linux
inputs:
- name: git
- name: concourse
outputs:
- name: deployment-manifest
run:
  path: concourse/tasks/create-and-upload-cloud-config/task.sh
The reason for this problem was that we pulled the Docker image from an internal Docker registry, which is running on HTTP only. Concourse tried to pull the image using HTTPS and it took around 5 mins until Concourse switched to HTTP (that's what a tcpdump on the worker showed us).
Changing the resource configuration to the following solved the problem:
resources:
- name: concourse-image
  type: docker-image
  source:
    repository: OUR_SERVER:80/subpath/concourse
    username: docker-readonly
    password: docker-readonly
    insecure_registries:
    - OUR_SERVER:80
So basically the fix was adding the port explicitly to both the repository and the insecure_registries entries.
