I'm currently trying to build and push Docker images; the issue is that I'm receiving this message from Concourse during the wordpress-release put step:
waiting for docker to come up...
invalid reference format
Here's the important bit of the Concourse pipeline:
resources:
- name: wordpress-release
  type: docker-image
  source:
    repository: #############.dkr.ecr.eu-west-1.amazonaws.com/wordpress-release
    aws_access_key_id: #############
    aws_secret_access_key: #############
- name: mysql-release
  type: docker-image
  source:
    repository: #############.dkr.ecr.eu-west-1.amazonaws.com/mysql-release
    aws_access_key_id: #############
    aws_secret_access_key: #############
jobs:
- name: job-hello-world
  plan:
  - get: wordpress-website
  - task: write-release-tag
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: { repository: alpine/git }
      inputs:
      - name: wordpress-website
      outputs:
      - name: tags
      run:
        dir: wordpress-website
        path: sh
        args:
        - -exc
        - |
          printf $(basename $(git remote get-url origin) | sed 's/\.[^.]*$/-/')$(git tag --points-at HEAD) > ../tags/release-tag
  - put: wordpress-release
    params:
      build: ./wordpress-website/.
      dockerfile: wordpress-website/shared-wordpress-images/wordpress/wordpress-release/Dockerfile
      tag_file: tags/release-tag
  - put: mysql-release
    params:
      build: ./wordpress-website/
      dockerfile: wordpress-website/shared-wordpress-images/db/mysql-release/Dockerfile
      tag_file: tags/release-tag
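For reference, the write-release-tag command composes the tag from the repository name and the git tag on HEAD. A standalone sketch of that logic, using a hypothetical remote URL and tag (the real task reads both from git):

```shell
# Standalone sketch of the write-release-tag logic; the URL and tag
# below are illustrative stand-ins for `git remote get-url origin`
# and `git tag --points-at HEAD`.
url="git@example.com:acme/wordpress-website.git"
head_tag="v1.2.3"
# basename strips the path; sed swaps the trailing ".git" for "-".
printf '%s' "$(basename "$url" | sed 's/\.[^.]*$/-/')$head_tag"
# -> wordpress-website-v1.2.3
```

Note that when HEAD carries no tag, the result ends in a bare trailing "-".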
Those images' Dockerfiles contain FROM #############.dkr.ecr.eu-west-1.amazonaws.com/shared-mysql (and shared-wordpress); could this be an issue?
The tag_file: tags/release-tag doesn't seem to be the problem: the error still happens even without it.
This is Concourse 5.0 running on top of Docker in Windows 10.
Any thoughts?
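"invalid reference format" is Docker's error for a malformed image reference (repository plus tag), so one thing worth ruling out is the contents of the tag file. A sketch of Docker's tag grammar for checking it (the helper function and sample values are illustrative, not part of the pipeline):

```shell
# Hypothetical helper: a Docker tag must start with an alphanumeric
# character or underscore, followed by up to 127 of [A-Za-z0-9_.-].
valid_tag() { printf '%s' "$1" | grep -Eqx '[A-Za-z0-9_][A-Za-z0-9_.-]{0,127}'; }

valid_tag "wordpress-website-v1.2.3" && echo "tag ok"
valid_tag "" || echo "an empty tag file would be an invalid reference"
```

An embedded newline, a space, or an empty tags/release-tag could all produce exactly this error.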
I'm new to Drone pipelines and am interested in using it in my current project for CI/CD.
My project tech stack is as follows:
Java
Spring Boot
Maven
I have created a sample Drone pipeline, but I'm not able to cache the Maven dependencies that are downloaded and stored in the .m2 folder.
It always says the mount path is not available or not found. Please find the screenshot for the same:
Drone mount path issue
I'm not sure what path to provide here. Can someone help me understand the mount path we need to provide to cache all the dependencies in the .m2 path?
Adding the pipeline information below:
kind: pipeline
type: docker
name: config-server

steps:
- name: restore-cache
  image: meltwater/drone-cache
  pull: if-not-exists
  settings:
    backend: "filesystem"
    restore: true
    cache_key: "volume"
    archive_format: "gzip"
    mount:
    - ./target
    - /root/.m2/repository
  volumes:
  - name: cache
    path: /tmp/cache
- name: build
  image: maven:3.8.3-openjdk-17
  pull: if-not-exists
  environment:
    M2_HOME: /usr/share/maven
    MAVEN_CONFIG: /root/.m2
  commands:
  - mvn clean install -DskipTests=true -B -V
  volumes:
  - name: cache
    path: /tmp/cache
- name: rebuild-cache
  image: meltwater/drone-cache
  pull: if-not-exists
  settings:
    backend: "filesystem"
    rebuild: true
    cache_key: "volume"
    archive_format: "gzip"
    mount:
    - ./target
    - /root/.m2/repository
  volumes:
  - name: cache
    path: /tmp/cache

trigger:
  branch:
  - main
  event:
  - push

volumes:
- name: cache
  host:
    path: /var/lib/cache
Thanks in advance.
Resolved the issue. Please find the solution and working Drone pipeline below. The key change was caching a workspace-relative repository path (./.m2/repository) and pointing Maven at it with -Dmaven.repo.local, instead of /root/.m2, which is not preserved between steps.
kind: pipeline
type: docker
name: data-importer

steps:
- name: restore-cache
  image: meltwater/drone-cache
  pull: if-not-exists
  settings:
    backend: "filesystem"
    restore: true
    ttl: 1
    cache_key: "volume"
    archive_format: "gzip"
    mount:
    - ./.m2/repository
  volumes:
  - name: cache
    path: /tmp/cache
- name: maven-build
  image: maven:3.8.6-amazoncorretto-11
  pull: if-not-exists
  commands:
  - mvn clean install -DskipTests=true -Dmaven.repo.local=.m2/repository -B -V
  volumes:
  - name: cache
    path: /tmp/cache
- name: rebuild-cache
  image: meltwater/drone-cache
  pull: if-not-exists
  settings:
    backend: "filesystem"
    rebuild: true
    cache_key: "volume"
    archive_format: "gzip"
    ttl: 1
    mount:
    - ./.m2/repository
  volumes:
  - name: cache
    path: /tmp/cache

trigger:
  branch:
  - main
  - feature/*
  event:
  - push

volumes:
- name: cache
  host:
    path: /var/lib/cache
I'm working on a Tekton pipeline in an OpenShift k8s cluster; I want to build a Java application from a Dockerfile with kaniko and push it to a private ECR registry.
But I'm getting a weird error in the build step:
INFO[0051] Retrieving image manifest private.repository.com:5000/custom_jdk8_tomcat8
INFO[0051] Retrieving image private.repository.com:5000/custom_jdk8_tomcat8 from registry private.repository.com:5000
INFO[0051] Built cross stage deps: map[]
INFO[0051] Retrieving image manifest registro.kolektor.com.ar:5000/jenkins_jdk8_tomcat8
INFO[0051] Returning cached image manifest
INFO[0051] Executing 0 build triggers
INFO[0051] Unpacking rootfs as cmd COPY target/WSRestDeudaR2.war "/opt/TOMCAT/webapps/javaApp.war" requires it.
error building image: error building stage: failed to get filesystem from image: failed to write "security.capability" attribute to "/usr/bin/ping": operation not permitted
step-write-url
2022/07/07 19:02:13 Skipping step because a previous step failed
Note
I'm working with a Dockerfile that uses images from a private repository, and I'm not able to modify the base image.
Dockerfile:
FROM private.repository.com:5000/custom_jdk8_tomcat8
COPY target/app.war "/opt/TOMCAT/webapps/app.war"
COPY pipeline/script/run.sh "/run.sh"
COPY apm-agent.jar /tmp/
Pipeline:
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: java-pipeline
spec:
  description: |
    This pipeline clones a git repo, then echoes the README file to the stdout.
  params:
  - name: git-repo-url
    type: string
    description: The git repo URL to clone from.
  - name: git-repo-branch
    description: The git branch where to clone from.
  - name: ecr-registry
    type: string
    description: ECR Registry URL where to push the image
  - name: ecr-repository
    type: string
    description: ECR Repository where to push the image
  - name: ecr-image-tag
    type: string
    description: Image tag
  workspaces:
  - name: git-basic-auth
    description: |
      This workspace contains basic auth configuration to be used by git-clone
  - name: aws-credentials
    description: |
      This workspace contains the aws credentials to authenticate to ECR
  - name: source-dir
    description: |
      This workspace contains the cloned repo files, so they can be read by the
      next task.
  tasks:
  ###############
  # GIT CLONE
  ###############
  - name: git-clone
    taskRef:
      name: git-clone
    params:
    - name: url
      value: $(params.git-repo-url)
    - name: revision
      value: $(params.git-repo-branch)
    workspaces:
    - name: output
      workspace: source-dir
    - name: basic-auth
      workspace: git-basic-auth
  ###############
  # ECR LOGIN
  ###############
  - name: ecr-login
    runAfter: ["git-clone"]
    taskRef:
      name: aws-ecr-login
    workspaces:
    - name: secrets
      workspace: aws-credentials
    params:
    - name: region
      value: us-east-1
  ###############
  # MAVEN BUILD
  ###############
  - name: maven-build-war
    runAfter: ["ecr-login"]
    taskRef:
      name: maven-build
    workspaces:
    - name: source
      workspace: source-dir
  ###############
  # KANIKO BUILD
  ###############
  - name: build-push-kaniko
    runAfter: ["maven-build-war"]
    #runAfter: ["git-clone"]
    taskRef:
      name: kaniko
    workspaces:
    - name: source
      workspace: source-dir
    - name: aws-credentials
      workspace: aws-credentials
    params:
    - name: IMAGE
      value: "$(params.ecr-registry)/$(params.ecr-repository):$(params.ecr-image-tag)"
I'm getting the error at the kaniko build step.
Does anyone know why this script isn't working?
version: 2.1
orbs:
  android: circleci/android@1.0.3
  gcp-cli: circleci/gcp-cli@2.2.0
jobs:
  build:
    working_directory: ~/code
    docker:
    - image: cimg/android:2022.04
      auth:
        username: mydockerhub-user
        password: $DOCKERHUB_PASSWORD
    environment:
      JVM_OPTS: -Xmx3200m
    steps:
    - checkout
    - run:
        name: Chmod permissions
        command: sudo chmod +x ./gradlew
    - run:
        name: Download Dependencies
        command: ./gradlew androidDependencies
    - run:
        name: Run Tests
        command: ./gradlew lint test
    - store_artifacts:
        path: app/build/reports
        destination: reports
    - store_test_results:
        path: app/build/test-results
  nightly-android-test:
    parameters:
      system-image:
        type: string
        default: system-images;android-30;google_apis;x86
    executor:
      name: android/android-machine
      resource-class: xlarge
    steps:
    - checkout
    - android/start-emulator-and-run-tests:
        test-command: ./gradlew connectedDebugAndroidTest
        system-image: << parameters.system-image >>
    - run:
        name: Save test results
        command: |
          mkdir -p ~/test-results/junit/
          find . -type f -regex ".*/build/outputs/androidTest-results/.*xml" -exec cp {} ~/test-results/junit/ \;
        when: always
    - store_test_results:
        path: ~/test-results
    - store_artifacts:
        path: ~/test-results/junit
workflows:
  unit-test-workflow:
    jobs:
    - build
  nightly-test-workflow:
    triggers:
    - schedule:
        cron: "0 0 * * *"
        filters:
          branches:
            only:
            - develop
    jobs:
    - nightly-android-test:
        matrix:
          alias: nightly
          parameters:
            system-image:
            - system-images;android-30;google_apis;x86
            - system-images;android-29;google_apis;x86
            - system-images;android-28;google_apis;x86
            - system-images;android-27;google_apis;x86
        name: nightly-android-test-<<matrix.system-image>>
I keep getting the following build error:
Config does not conform to schema: {:workflows {:nightly-test-workflow {:jobs
[{:nightly-android-test {:matrix disallowed-key, :name disallowed-key}}]}}}
The second workflow seems to fail due to the matrix and name parameters, but I can't see anything wrong in the script that would make them fail. I've tried a YAML parser and couldn't see any null values, and I tried the CircleCI discussion forum without much luck.
I don't think that's the correct syntax. See the CircleCI documentation:
https://circleci.com/docs/2.0/configuration-reference/#matrix-requires-version-21
https://circleci.com/docs/2.0/using-matrix-jobs/
According to the above references, I believe it should be:
- nightly-android-test:
    matrix:
      alias: nightly
      parameters:
        system-image: ["system-images;android-30;google_apis;x86", "system-images;android-29;google_apis;x86", "system-images;android-28;google_apis;x86", "system-images;android-27;google_apis;x86"]
    name: nightly-android-test-<<matrix.system-image>>
I'm trying to add a hold job into a workflow in CircleCI's config.yml file but I cannot make it work and I'm pretty sure it's a really simple error on my part (I just can't see it!).
When validating it locally with the CircleCI CLI by running
circleci config validate
I get the following error:
Error: Job 'hold' requires 'build-and-test-service', which is the name of 0 other jobs in workflow 'build-deploy'
This is the config.yml (note it's for a Serverless Framework application - not that that should make any difference)
version: 2.1
jobs:
  build-and-test-service:
    docker:
    - image: timbru31/java-node
    parameters:
      service_path:
        type: string
    steps:
    - checkout
    - serverless/setup:
        app-name: serverless-framework-orb
        org-name: circleci
    - restore_cache:
        keys:
        - dependencies-cache-{{ checksum "v2/shared/package-lock.json" }}-{{ checksum "package-lock.json" }}-{{ checksum "<< parameters.service_path >>/package-lock.json" }}
        - dependencies-cache
    - run:
        name: Install dependencies
        command: |
          npm install
          cd v2/shared
          npm install
          cd ../../<< parameters.service_path >>
          npm install
    - run:
        name: Test service
        command: |
          cd << parameters.service_path >>
          npm run test:ci
    - store_artifacts:
        path: << parameters.service_path >>/test-results/jest
        prefix: tests
    - store_artifacts:
        path: << parameters.service_path >>/coverage
        prefix: coverage
    - store_test_results:
        path: << parameters.service_path >>/test-results
  deploy:
    docker:
    - image: circleci/node:lts
    parameters:
      service_path:
        type: string
      stage_name:
        type: string
      region:
        type: string
    steps:
    - run:
        name: Deploy application
        command: |
          cd << parameters.service_path >>
          serverless deploy --verbose --stage << parameters.stage_name >> --region << parameters.region >>
    - save_cache:
        paths:
        - node_modules
        - << parameters.service_path >>/node_modules
        key: dependencies-cache-{{ checksum "package-lock.json" }}-{{ checksum "<< parameters.service_path >>/package-lock.json" }}
orbs:
  serverless: circleci/serverless-framework@1.0.1
workflows:
  version: 2
  build-deploy:
    jobs:
    # non-master branches deploy to a stage named after the branch
    - build-and-test-service:
        name: Build and test campaign
        service_path: v2/campaign
        filters:
          branches:
            only: develop
    - hold:
        name: hold
        type: approval
        requires:
        - build-and-test-service
    - deploy:
        service_path: v2/campaign
        stage_name: dev
        region: eu-west-2
        requires:
        - hold
It's obvious the error relates to the hold step (near the bottom of the config) not being able to find build-and-test-service just above it, but build-and-test-service does exist, so I'm stumped at this point.
For anyone reading I figured out why it wasn't working.
Essentially I was using the incorrect property reference under the requires key:
workflows:
  version: 2
  build-deploy:
    jobs:
    # non-master branches deploy to a stage named after the branch
    - build-and-test-service:
        name: Build and test campaign
        service_path: v2/campaign
        filters:
          branches:
            only: develop
    - hold:
        name: hold
        type: approval
        requires:
        - build-and-test-service
The correct value under requires in this case is the job's name in the workflow, i.e. Build and test campaign, so I just changed that name back to build-and-test-service.
I found the CircleCI docs were not very clear on this, perhaps because their examples around manual approvals show requires pointing at the job's root key, such as build-and-test-service.
I suppose I should have read the error more carefully too; it did mention name there as well.
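Equivalently, a sketch of the other way to fix it: keep the custom display name and point requires at that name instead of the job key.

```yaml
# Sketch: `requires` must reference the workflow-level `name`,
# not the job's key, whenever a custom name is set.
- build-and-test-service:
    name: Build and test campaign
    service_path: v2/campaign
- hold:
    type: approval
    requires:
    - Build and test campaign
```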
I have Concourse 3.8.0 running on my workstation, which is Ubuntu 17.04, and here is my pipeline definition:
---
jobs:
- name: job-docker-image-resource
  public: true
  plan:
  - get: "golang_tools_docker_image"
  - task: docker-image-resource
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: busybox}
      run:
        path: echo
        args: [docker-image-resource]
resources:
- name: "golang_tools_docker_image"
  type: docker-image
  source:
    repository: "golang"
    tag: "1.9.2-alpine3.7"
resource_types:
- name: docker-image
  type: docker-image
  source:
    repository: concourse/docker-image-resource
    tag: docker-1.12.6
And here is the output:
This works fine in Concourse 2.7.7. I haven't tried any versions between 2.7.7 and 3.8.0 yet.
You need privileged: true on the resource type definition:
resource_types:
- name: docker-image
  privileged: true
  type: docker-image
  source:
    repository: concourse/docker-image-resource
    tag: latest