I'm using the ECR orb to build and push my image to the registry; however, the CircleCI env vars are not available in the build process for some reason.
Here's my config.yml
(I've commented out some of the steps to troubleshoot):
version: 2.1
orbs:
aws-ecr: circleci/aws-ecr@3.1.0
aws-ecs: circleci/aws-ecs@0.0.8
jobs:
build:
docker:
- image: circleci/python:3.7
steps:
- checkout
- restore_cache:
keys:
- cache-{{ checksum "Pipfile.lock" }}
- cache-
- run:
name: Install dependencies
command: pipenv sync --dev
- save_cache:
key: cache-{{ checksum "Pipfile.lock" }}
paths:
- ~/.local
- ~/.cache
- run:
name: 'Lint Flake8'
command: pipenv run flake8
# - run:
# name: 'Test'
# command: |
# ENVIRONMENT=development pipenv run python src/manage.py test --noinput
workflows:
build-and-deploy:
jobs:
- build
- aws-ecr/build_and_push_image:
account-url: AWS_ECR_ACCOUNT_URL
aws-access-key-id: AWS_ACCESS_KEY_ID
aws-secret-access-key: AWS_SECRET_ACCESS_KEY
repo: "dev-portal"
region: AWS_DEFAULT_REGION
tag: "${CIRCLE_SHA1}"
requires:
- build
filters:
branches:
only: circleci/aws-ecs-deploy
# - aws-ecs/deploy-service-update:
# requires:
# - aws-ecr/build_and_push_image
# filters:
# branches:
# only: circleci/aws-ecs-deploy
# aws-region: AWS_DEFAULT_REGION
# family: "${AWS_RESOURCE_NAME_PREFIX}-service"
# cluster-name: "${AWS_RESOURCE_NAME_PREFIX}-cluster"
# container-image-name-updates: "container=${AWS_RESOURCE_NAME_PREFIX}-service,image-and-tag=${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/nginx-portal:${CIRCLE_SHA1}"
# verify-revision-is-deployed: true
# post-steps:
# - run:
# name: Test the deployment
# command: |
# TARGET_GROUP_ARN=$(aws ecs describe-services --cluster ${AWS_RESOURCE_NAME_PREFIX}-cluster --services ${AWS_RESOURCE_NAME_PREFIX}-service | jq -r '.services[0].loadBalancers[0].targetGroupArn')
# ELB_ARN=$(aws elbv2 describe-target-groups --target-group-arns $TARGET_GROUP_ARN | jq -r '.TargetGroups[0].LoadBalancerArns[0]')
# ELB_DNS_NAME=$(aws elbv2 describe-load-balancers --load-balancer-arns $ELB_ARN | jq -r '.LoadBalancers[0].DNSName')
What do I not understand about orb contexts / CircleCI's env vars?
Thanks y'all!
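(Side note for anyone comparing configs: the orb parameters above take the names of environment variables, so those variables still have to exist in the project's settings or in an organization context attached to the orb job. A minimal sketch of the context approach, assuming a hypothetical context called aws-credentials that holds the AWS_* variables:)
workflows:
  build-and-deploy:
    jobs:
      - aws-ecr/build_and_push_image:
          # "aws-credentials" is a made-up context name; it would need to exist in the
          # CircleCI organization settings and contain AWS_ECR_ACCOUNT_URL,
          # AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION.
          context: aws-credentials
          account-url: AWS_ECR_ACCOUNT_URL
          aws-access-key-id: AWS_ACCESS_KEY_ID
          aws-secret-access-key: AWS_SECRET_ACCESS_KEY
          repo: "dev-portal"
          region: AWS_DEFAULT_REGION
          tag: "${CIRCLE_SHA1}"
          requires:
            - build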
This is my first time trying to set up CI to Google Cloud from GitLab. So far this journey has been very painful, but I think I'm getting closer.
I followed some instructions from:
https://medium.com/google-cloud/deploy-to-cloud-run-using-gitlab-ci-e056685b8eeb
and adapted the .gitlab-ci.yml and the cloudbuild.yaml to my needs.
After several tries, I finally managed to set up all the roles, permissions and service accounts. But I've had no luck building my Dockerfile into Container Registry or Artifact Registry.
This is the failure log from the GitLab job:
Running with gitlab-runner 14.6.0~beta.71.gf035ecbf (f035ecbf)
on green-3.shared.runners-manager.gitlab.com/default Jhc_Jxvh
Preparing the "docker+machine" executor
Using Docker executor with image google/cloud-sdk:latest ...
Pulling docker image google/cloud-sdk:latest ...
Using docker image sha256:2ec5b4332b2fb4c55f8b70510b82f18f50cbf922f07be59de3e7f93937f3d37f for google/cloud-sdk:latest with digest google/cloud-sdk@sha256:e268d9116c9674023f4f6aff680987f8ee48d70016f7e2f407fe41e4d57b85b1 ...
Preparing environment
Running on runner-jhcjxvh-project-32231297-concurrent-0 via runner-jhcjxvh-shared-1641939667-f7d79e2f...
Getting source from Git repository
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/ProjectsD/node-projects/.git/
Created fresh repository.
Checking out 1f1e41f0 as dev...
Skipping Git submodules setup
Executing "step_script" stage of the job script
Using docker image sha256:2ec5b4332b2fb4c55f8b70510b82f18f50cbf922f07be59de3e7f93937f3d37f for google/cloud-sdk:latest with digest google/cloud-sdk@sha256:e268d9116c9674023f4f6aff680987f8ee48d70016f7e2f407fe41e4d57b85b1 ...
$ echo $GCP_SERVICE_KEY > gcloud-service-key.json
$ gcloud auth activate-service-account --key-file=gcloud-service-key.json
Activated service account credentials for: [gitlab-ci-cd@pdnodejs.iam.gserviceaccount.com]
$ gcloud config set project $GCP_PROJECT_ID
Updated property [core/project].
$ gcloud builds submit . --config=cloudbuild.yaml
Creating temporary tarball archive of 47 file(s) totalling 100.8 MiB before compression.
Some files were not included in the source upload.
Check the gcloud log [/root/.config/gcloud/logs/2022.01.11/22.23.29.855708.log] to see which files and the contents of the
default gcloudignore file used (see `$ gcloud topic gcloudignore` to learn
more).
Uploading tarball of [.] to [gs://pdnodejs_cloudbuild/source/1641939809.925215-a19e660f1d5040f3ac949d2eb5766abb.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/pdnodejs/locations/global/builds/577417e7-67b9-419e-b61b-f1be8105dd5a].
Logs are available at [https://console.cloud.google.com/cloud-build/builds/577417e7-67b9-419e-b61b-f1be8105dd5a?project=484193191648].
gcloud builds submit only displays logs from Cloud Storage. To view logs from Cloud Logging, run:
gcloud beta builds submit
BUILD FAILURE: Build step failure: build step 1 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
ERROR: (gcloud.builds.submit) build 577417e7-67b9-419e-b61b-f1be8105dd5a completed with status "FAILURE"
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
.gitlab-ci
# file: .gitlab-ci.yml
stages:
# - docker-build
- deploy_dev
# docker-build:
# stage: docker-build
# image: docker:latest
# services:
# - docker:dind
# before_script:
# - echo $CI_BUILD_TOKEN | docker login -u "$CI_REGISTRY_USER" --password-stdin $CI_REGISTRY
# script:
# - docker build --pull -t "$CI_REGISTRY_IMAGE" .
# - docker push "$CI_REGISTRY_IMAGE"
deploy_dev:
stage: deploy_dev
image: google/cloud-sdk:latest
script:
- echo $GCP_SERVICE_KEY > gcloud-service-key.json # google cloud service accounts
- gcloud auth activate-service-account --key-file=gcloud-service-key.json
- gcloud config set project $GCP_PROJECT_ID
- gcloud builds submit . --config=cloudbuild.yaml
cloudbuild.yaml
# File: cloudbuild.yaml
steps:
# build the container image
- name: 'gcr.io/cloud-builders/docker'
args: [ 'build', '-t', 'gcr.io/$PROJECT_ID/node-projects', '.' ]
# push the container image
- name: 'gcr.io/cloud-builders/docker'
args: [ 'push', 'gcr.io/$PROJECT_ID/node-projects']
# deploy to Cloud Run
- name: "gcr.io/cloud-builders/gcloud"
args: ['run', 'deploy', 'erp-ui', '--image', 'gcr.io/$PROJECT_ID/node-projects', '--region', 'us-central4', '--platform', 'managed', '--allow-unauthenticated']
options:
logging: CLOUD_LOGGING_ONLY
Is there any other configuration I'm missing inside GCP, or is something wrong with my files?
😮💨
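For anyone digging into a similar failure: the build id printed in the log can be used to pull the Cloud Build log directly. A rough sketch of a manual debug job, assuming the same GCP_SERVICE_KEY / GCP_PROJECT_ID variables (note that with the logging: CLOUD_LOGGING_ONLY option the per-step logs may only be visible at the Console link printed in the output):
debug-build-log:
  stage: deploy_dev
  image: google/cloud-sdk:latest
  when: manual
  script:
    - echo $GCP_SERVICE_KEY > gcloud-service-key.json
    - gcloud auth activate-service-account --key-file=gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    # build id copied from the failing run above; substitute the current one
    - gcloud builds log 577417e7-67b9-419e-b61b-f1be8105dd5a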
UPDATE: I tried again and finally succeeded.
I started moving everything around from scratch, and I now get a correct deploy.
.gitlab-ci
stages:
- build
- push
default:
image: docker:latest
services:
- docker:dind
before_script:
- echo $CI_BUILD_TOKEN | docker login -u "$CI_REGISTRY_USER" --password-stdin $CI_REGISTRY
docker-build:
stage: build
only:
refs:
- main
- dev
script:
- |
if [[ "$CI_COMMIT_BRANCH" == "$CI_DEFAULT_BRANCH" ]]; then
tag=""
echo "Running on default branch '$CI_DEFAULT_BRANCH': tag = 'latest'"
else
tag=":$CI_COMMIT_REF_SLUG"
echo "Running on branch '$CI_COMMIT_BRANCH': tag = $tag"
fi
- docker build --pull -t "$CI_REGISTRY_IMAGE${tag}" .
- docker push "$CI_REGISTRY_IMAGE${tag}"
# Run this job in a branch where a Dockerfile exists
interruptible: true
environment:
name: build/$CI_COMMIT_REF_NAME
push:
stage: push
only:
refs:
- main
- dev
script:
- apk upgrade --update-cache --available
- apk add openssl
- apk add curl python3 py-crcmod bash libc6-compat
- rm -rf /var/cache/apk/*
- curl https://sdk.cloud.google.com | bash > /dev/null
- export PATH=$PATH:/root/google-cloud-sdk/bin
- echo $GCP_SERVICE_KEY > gcloud-service-key-push.json # Google Cloud service accounts
- gcloud auth activate-service-account --key-file gcloud-service-key-push.json
- gcloud config set project $GCP_PROJECT_ID
- gcloud auth configure-docker us-central1-docker.pkg.dev
- tag=":$CI_COMMIT_REF_SLUG"
- docker pull "$CI_REGISTRY_IMAGE${tag}"
- docker tag "$CI_REGISTRY_IMAGE${tag}" us-central1-docker.pkg.dev/$GCP_PROJECT_ID/node-projects/node-js-app${tag}
- docker push us-central1-docker.pkg.dev/$GCP_PROJECT_ID/node-projects/node-js-app${tag}
environment:
name: push/$CI_COMMIT_REF_NAME
when: on_success
cloudbuild.yaml
# File: cloudbuild.yaml
steps:
# build the container image
- name: 'gcr.io/cloud-builders/docker'
args:
[
'build',
'-t',
'us-central1-docker.pkg.dev/$PROJECT_ID/node-projects/nodejsapp',
'.',
]
# push the container image
- name: 'gcr.io/cloud-builders/docker'
args: ['push', 'us-central1-docker.pkg.dev/$PROJECT_ID/node-projects/nodejsapp']
# deploy to Cloud Run
- name: 'gcr.io/cloud-builders/gcloud'
args:
[
'beta',
'run',
'deploy',
'dreamslear',
'--image',
'us-central1-docker.pkg.dev/$PROJECT_ID/node-projects/nodejsapp',
'--region',
'us-central1',
'--platform',
'managed',
'--port',
'3000',
'--allow-unauthenticated',
]
And that worked!
If someone wants to suggest an optimised workflow or give any advice, that would be great!
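One possible simplification, offered as an untested sketch (it assumes the same GCP_SERVICE_KEY / GCP_PROJECT_ID variables and the same Artifact Registry path as above): build once in the dind job, tag the image for both registries, and log Docker in to Artifact Registry with the raw service-account key, instead of installing the whole Cloud SDK and pulling the image back in a second stage.
docker-build-push:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - echo $CI_BUILD_TOKEN | docker login -u "$CI_REGISTRY_USER" --password-stdin $CI_REGISTRY
  script:
    - tag=":$CI_COMMIT_REF_SLUG"
    # one build, two tags
    - docker build --pull -t "$CI_REGISTRY_IMAGE${tag}" -t "us-central1-docker.pkg.dev/$GCP_PROJECT_ID/node-projects/node-js-app${tag}" .
    - docker push "$CI_REGISTRY_IMAGE${tag}"
    # Artifact Registry accepts a docker login with user _json_key and the key contents as the password
    - echo "$GCP_SERVICE_KEY" | docker login -u _json_key --password-stdin https://us-central1-docker.pkg.dev
    - docker push "us-central1-docker.pkg.dev/$GCP_PROJECT_ID/node-projects/node-js-app${tag}"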
I have a setup that used to work for at least a couple of years without any issues. In my latest update, all of a sudden I am getting the following error:
#!/bin/bash -eo pipefail
./grailsw compile
You must be connected to the internet the first time you use the Grails wrapper
org.xml.sax.SAXParseException; lineNumber: 6; columnNumber: 3; The element type "hr" must be terminated by the matching end-tag "</hr>".
at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:203)
at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:177)
at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:400)
at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:327)
at com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(XMLScanner.java:1473)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanEndElement(XMLDocumentFragmentScannerImpl.java:1749)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2967)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:602)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:505)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:842)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:771)
at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213)
at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:643)
at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl.parse(SAXParserImpl.java:327)
at javax.xml.parsers.SAXParser.parse(SAXParser.java:195)
at grails.init.Start.getVersion(Start.java:36)
at grails.init.Start.main(Start.java:83)
Exited with code exit status 1
CircleCI received exit code 1
And basically, I am getting the same error for any grailsw command.
Here is my config file:
version: 2.0
references:
defaults: &defaults
docker:
- image: circleci/openjdk:8-jdk
working_directory: ~/priz-be
remote_docker: &remote_docker
setup_remote_docker:
docker_layer_caching: false
assemble_prod: &assemble_prod
run:
name: Assemble
command: ./gradlew -Dgrails.env=prod assemble
build_and_push_docker_image: &build_and_push_docker_image
run:
name: Build docker image
command: |
sudo apt-get update --fix-missing
sudo apt-get install python-pip python-dev
sudo pip install awscli
cp -p build/libs/priz-be-0.1.war docker/app.war
aws ecr get-login --no-include-email --region us-west-2 | sh
docker build -t priz-be docker
docker tag priz-be:latest 922556357703.dkr.ecr.us-west-2.amazonaws.com/priz-be:prod-$CIRCLE_SHA1
docker push 922556357703.dkr.ecr.us-west-2.amazonaws.com/priz-be:prod-$CIRCLE_SHA1
jobs:
checkout_code:
<<: *defaults
steps:
- checkout
- run:
name: Show current branch
command: echo ${CIRCLE_BRANCH}
- save_cache:
key: v1-repo-{{ .Environment.CIRCLE_SHA1 }}
paths:
- ~/priz-be
compile:
<<: *defaults
steps:
- restore_cache:
keys:
- v1-repo-{{ .Environment.CIRCLE_SHA1 }}
- restore_cache:
keys:
- v1-dependencies-{{ checksum "build.gradle" }}
# fallback to using the latest cache if no exact match is found
- v1-dependencies-
- run:
name: Compile the project
command: ./grailsw compile
# Cache local dependencies if they don't exist
- save_cache:
paths:
- ~/.gradle
key: v1-dependencies-{{ checksum "build.gradle" }}
test_and_check:
<<: *defaults
steps:
- restore_cache:
keys:
- v1-repo-{{ .Environment.CIRCLE_SHA1 }}
- restore_cache:
keys:
- v1-dependencies-{{ checksum "build.gradle" }}
# fallback to using the latest cache if no exact match is found
- v1-dependencies-
- run:
name: Testing
command: ./grailsw test-app
- run:
name: Executing stylecheck
command: ./gradlew check
- store_artifacts:
path: ./build/reports/codenarc
destination: codenarc-report
- store_test_results:
path: ./build/test-results/test
- store_artifacts:
path: ./build/reports/tests
destination: test-report
deploy_to_prod:
<<: *defaults
steps:
- restore_cache:
keys:
- v1-repo-{{ .Environment.CIRCLE_SHA1 }}
- restore_cache:
keys:
- v1-dependencies-{{ checksum "build.gradle" }}
# fallback to using the latest cache if no exact match is found
- v1-dependencies-
- *assemble_prod
- *remote_docker
- *build_and_push_docker_image
- add_ssh_keys:
fingerprints:
- "9c:0c:ce:67:62:74:f1:d7:aa:b4:46:55:56:51:e5:f7"
- run:
name: Deploy
command: |
sudo apt-get update
sudo apt-get -y install gettext-base
sudo apt-get clean
envsubst < docker/deploy-prod.sh.template > docker/deploy-prod.sh
ssh -v -o StrictHostKeyChecking=no root@178.128.78.7 "bash -s" -- < ./docker/deploy-prod.sh
workflows:
version: 2
build-and-test:
jobs:
- checkout_code
- compile:
requires:
- checkout_code
- test_and_check:
requires:
- compile
- deploy_to_prod:
requires:
- test_and_check
filters:
branches:
only: master
If I log in with SSH and try to execute the same thing by hand, I get the same error. I also checked that there is a network connection; all good...
What can be the reason for this error?
This is caused by a problem with the Artifactory instance. More information is available at github.com/grails/grails-wrapper/issues/7 and github.com/grails/grails-core/issues/11825.
The Grails repository has been moved to an Artifactory instance and the paths have changed.
You may have to update your Maven repository URL in build.gradle:
// Old repository URL
maven { url "https://repo.grails.org/grails/core" }
// New URL
maven { url "https://repo.grails.org/artifactory/core/" }
The only official information I found about this change was Important Information Regarding Grails and Bintray.
You can find all available repositories here. (In case you need the plugins repository or anything else.)
I hope that fixes your problem.
Regards
I am building my React Native iOS app with CircleCI 2.0, and my build has been stuck at Signing for about 40 minutes without moving ahead.
My Apple ID has 2FA, so I have added FASTLANE_APPLE_APPLICATION_SPECIFIC_PASSWORD to the environment variables.
My GitHub account also has 2FA, and I have deployed the ad hoc certificates to git with GH_TOKEN in the CircleCI env variables.
My Fastlane file looks like this:
update_fastlane
default_platform(:ios)
platform :ios do
before_all do
setup_circle_ci
end
desc "Push a new beta build to TestFlight"
lane :beta do
increment_build_number(xcodeproj: "xxxApp.xcodeproj")
match(type:'adhoc')
gym(export_method: "ad-hoc")
build_app(workspace: "xxxApp.xcworkspace", scheme: "xxxApp")
# upload_to_testflight
end
end
My CircleCI config.yml looks like this:
version: 2
jobs:
node:
working_directory: ~/sekuraRN
docker:
- image: circleci/node:10.16.0
steps:
- checkout
- run:
name: set Ruby version
command: echo "ruby-2.4" > ~/.ruby-version
- run: npm install
- persist_to_workspace:
root: ~/xxxApp
paths:
- node_modules
ios:
macos:
xcode: '11.2.1'
resource_class: large
working_directory: ~/xxxApp
shell: /bin/bash --login -o pipefail
steps:
- checkout
- add_ssh_keys:
fingerprints:
- "XXXXXXXX"
- run:
name: set Ruby version
command: echo 'chruby ruby-2.5.7' >> ~/.bash_profile
- run:
name: Update Bundler version
command: sudo gem install bundler:2.1.1
- restore_cache:
key: npm-v1-{{ checksum "package-lock.json" }}-{{ arch }}
- restore_cache:
key: node-v1-{{ checksum "package.json" }}-{{ arch }}
- run: npm install
- save_cache:
key: npm-v1-{{ checksum "package-lock.json" }}-{{ arch }}
paths:
- ~/.cache/npm
- save_cache:
key: bundle-v1-{{ checksum "ios/Podfile.lock" }}
paths:
- ./Pods
- save_cache:
key: node-v1-{{ checksum "package.json" }}-{{ arch }}
paths:
- node_modules
- run:
command: bundle install
working_directory: ios
- save_cache:
key: bundle-v1-{{ checksum "ios/Gemfile.lock" }}-{{ arch }}
paths:
- vendor/bundle
- run:
name: Setup MFSDK Configuration
command: echo -e "machine repo.active.ai\nlogin docs#active.ai\npassword docs#123" > ~/.netrc
- run:
name: Uninstall Cocoapods
command: gem uninstall cocoapods
- run:
name: Install Cocoapods
command: gem install -n /usr/local/bin cocoapods
- run:
name: Pod Install
command: pod install
working_directory: ios
- run:
name: update fastlane
command: bundle update fastlane
working_directory: ios
- run:
name: Building IPA
no_output_timeout: 30m
command: bundle exec fastlane beta
working_directory: ios
- store_artifacts:
path: ios/xxx_app
destination: ipa/
workflows:
version: 2
node-android-ios:
jobs:
- node
- ios:
filters:
branches:
only:
- master
requires:
- node
Output from CircleCI -
[07:47:40]: ▸ the transform cache was reset.
[07:49:45]: ▸ Touching xxxApp.app
[07:49:45]: ▸ Signing /Users/distiller/Library/Developer/Xcode/DerivedData/…
It doesn't move ahead and stays stuck at this point for 40 minutes.
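A small debugging tweak, offered as a sketch rather than a fix: running the same lane with fastlane's --verbose flag usually shows which codesign or keychain call the Signing phase is actually waiting on.
      - run:
          name: Building IPA (verbose)
          no_output_timeout: 30m
          command: bundle exec fastlane beta --verbose
          working_directory: ios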
I'm trying to cascade a series of tasks using the workflow syntax with CircleCI. For some reason, only the build job seems to run, but my other jobs do not.
version: 2
jobs:
build:
docker:
- image: circleci/node:latest
steps:
- checkout
- restore_cache:
keys:
- sfdx-version-41-local
- run:
name: Install SFDX
command: pwd
- save_cache:
key: sfdx-version-41-local
paths:
- node_modules
auth:
steps:
- run:
name: Authenticate
command: ls -a
validate:
steps:
- run:
name: Validate
command: mkdir whocares
clean:
steps:
- run:
name: Remove Server Key
when: always
command: pwd
workflows:
version: 2
authenticate-and-deploy:
jobs:
- build
- auth
- validate
- clean
Ideally, I want to make sure each job finishes successfully (with a zero exit code) before moving on to the next one. But I'm not sure why the subsequent jobs after build are not being executed.
Thanks,
If you really want all of these jobs for some reason: the workflows config is missing requires entries, and each job is missing its executor. I also fixed the indenting. You can do this:
version: 2
jobs:
build:
docker:
- image: circleci/node:latest
steps:
- checkout
- restore_cache:
keys:
- sfdx-version-41-local
- run:
name: Install SFDX
command: pwd
- save_cache:
key: sfdx-version-41-local
paths:
- node_modules
auth:
docker:
- image: circleci/node:latest
steps:
- run:
name: Authenticate
command: ls -a
validate:
docker:
- image: circleci/node:latest
steps:
- run:
name: Validate
command: mkdir whocares
clean:
docker:
- image: circleci/node:latest
steps:
- run:
name: Remove Server Key
when: always
command: pwd
workflows:
version: 2
authenticate-and-deploy:
jobs:
- build
- auth:
    requires:
      - build
- validate:
    requires:
      - auth
- clean:
    requires:
      - validate
Here is what I ended up doing:
job-definition: &jobdef
docker:
- image: circleci/openjdk:latest-node-browsers
steps:
- checkout
- restore_cache:
keys:
- sfdx-6.8.2-local
- run:
name: Print branch
command: |
echo $CIRCLE_BRANCH
if [[ "$CIRCLE_BRANCH" == 'master' ]];
then
echo "master";
else
echo "notMaster";
fi
- save_cache:
key: sfdx-6.8.2-local
paths:
- node_modules
- run:
name: "Validate Build"
command: |
#node_modules/sfdx-cli/bin/run force:mdapi:deploy -d src/ -u $USERNAME -c --testlevel RunLocalTests
echo $CIRCLE_BRANCH
- run:
name: Push to Codecov.io
command: |
cp ~/tests/apex/test-result-codecoverage.json .
bash <(curl -s https://codecov.io/bash)
- store_artifacts:
path: ~/tests
- store_test_results:
path: ~/tests
version: 2
jobs:
static-analysis:
docker:
- image: circleci/openjdk:latest
steps:
- checkout
- restore_cache:
keys:
- pmd-v6.0.1
- run:
name: Install PMD
command: |
if [ ! -d pmd-bin-6.0.1 ]; then
curl -L "https://github.com/pmd/pmd/releases/download/pmd_releases/6.0.1/pmd-bin-6.0.1.zip" -o pmd-bin-6.0.1.zip
unzip pmd-bin-6.0.1.zip
rm pmd-bin-6.0.1.zip
fi
- save_cache:
key: pmd-v6.0.1
paths:
- pmd-bin-6.0.1
- run:
name: Run Static Analysis
command: |
pmd-bin-6.0.1/bin/run.sh pmd -d ./src/ -R $RULESET -f text -l apex -r static-analysis.txt -no-cache
- store_artifacts:
path: static-analysis.txt
build-enterprise:
<<: *jobdef
environment:
SCRATCH_DEF: workspace-scratch-def.json
URL: https://login.salesforce.com
NAME: $PROD_USERNAME
build-developer:
<<: *jobdef
environment:
SCRATCH_DEF: developer.json
URL: https://test.salesforce.com
NAME: $EVANDEV_USERNAME
workflows:
version: 2
test_and_static:
jobs:
- build-enterprise
#- build-developer
#- static-analysis
Two other things I'm trying to do (see the sketch after this list):
1. Parameterize the build
2. Run the build on a PR
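A sketch of the parameterization half, assuming the config is bumped to version 2.1 (the parameter names below are placeholders, and the PR trigger is left out because it depends on the project's VCS settings):
version: 2.1
jobs:
  build:
    parameters:
      scratch-def:
        type: string
      url:
        type: string
    docker:
      - image: circleci/openjdk:latest-node-browsers
    environment:
      SCRATCH_DEF: << parameters.scratch-def >>
      URL: << parameters.url >>
    steps:
      - checkout
      # ...same steps as the &jobdef anchor above...
workflows:
  test_and_static:
    jobs:
      - build:
          name: build-enterprise
          scratch-def: workspace-scratch-def.json
          url: https://login.salesforce.com
      - build:
          name: build-developer
          scratch-def: developer.json
          url: https://test.salesforce.com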
Are instrumentation tests for Android Espresso available on CircleCI 2.0?
If yes, can anybody please help me configure the config.yml file?
I've made a thousand attempts and had no luck. I can run unit tests, but not instrumentation tests.
Thanks
The answer to this question is: yes. Instrumentation tests are possible on CircleCI. This is the configuration I have:
version: 2
jobs:
build:
working_directory: ~/code
docker:
- image: circleci/android:api-25-alpha
environment:
JVM_OPTS: -Xmx3200m
steps:
- checkout
- restore_cache:
key: jars-{{ checksum "build.gradle" }}-{{ checksum "app/build.gradle" }}
- run:
name: Chmod permissions #if permission for Gradlew Dependencies fail, use this.
command: sudo chmod +x ./gradlew
- run:
name: Download Dependencies
command: ./gradlew androidDependencies
- save_cache:
paths:
- ~/.gradle
key: jars-{{ checksum "build.gradle" }}-{{ checksum "app/build.gradle" }}
- run:
name: Setup emulator
command: sdkmanager "system-images;android-25;google_apis;armeabi-v7a" && echo "no" | avdmanager create avd -n test -k "system-images;android-25;google_apis;armeabi-v7a"
- run:
name: Launch emulator
command: export LD_LIBRARY_PATH=${ANDROID_HOME}/emulator/lib64:${ANDROID_HOME}/emulator/lib64/qt/lib && emulator64-arm -avd test -noaudio -no-boot-anim -no-window -accel on
background: true
- run:
name: Wait emulator
command: |
# wait for it to have booted
circle-android wait-for-boot
# unlock the emulator screen
sleep 30
adb shell input keyevent 82
- run:
name: Run Tests
command: ./gradlew connectedAndroidTest
- store_artifacts:
path: app/build/reports
destination: reports
- store_test_results:
path: app/build/test-results
The only problem with this configuration is that it doesn't lead to a successful build, because of a not-enough-memory error. If somebody has a better configuration, please share.
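One mitigation that is often suggested for the not-enough-memory failure, as an unverified sketch on top of the configuration above: cap Gradle's JVM and disable the Gradle daemon through the job's environment.
    environment:
      JVM_OPTS: -Xmx3200m
      # assumed values; tune to the executor's available RAM
      GRADLE_OPTS: -Dorg.gradle.daemon=false -Dorg.gradle.jvmargs="-Xmx2048m -XX:+HeapDumpOnOutOfMemoryError"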
I am running Android UI tests on the CircleCI macOS executor.
Here is my configuration:
version: 2
reference:
## Constants
gradle_cache_path: &gradle_cache_path
gradle_cache-{{ checksum "build.gradle" }}-{{ checksum "app/build.gradle" }}
workspace: &workspace
~/src
## Configurations
android_config: &android_config
working_directory: *workspace
macos:
xcode: "9.4.0"
shell: /bin/bash --login -eo pipefail
environment:
TERM: dumb
JVM_OPTS: -Xmx3200m
## Cache
restore_gradle_cache: &restore_gradle_cache
restore_cache:
key: *gradle_cache_path
save_gradle_cache: &save_gradle_cache
save_cache:
key: *gradle_cache_path
paths:
- ~/.gradle
## Dependency Downloads
download_android_dependencies: &download_android_dependencies
run:
name: Download Android Dependencies
command: ./gradlew androidDependencies
jobs:
ui_test:
<<: *android_config
steps:
- checkout
- run:
name: Setup environment variables
command: |
echo 'export PATH="$PATH:/usr/local/opt/node@8/bin:${HOME}/.yarn/bin:${HOME}/${CIRCLE_PROJECT_REPONAME}/node_modules/.bin:/usr/local/share/android-sdk/tools/bin"' >> $BASH_ENV
echo 'export ANDROID_HOME="/usr/local/share/android-sdk"' >> $BASH_ENV
echo 'export ANDROID_SDK_HOME="/usr/local/share/android-sdk"' >> $BASH_ENV
echo 'export ANDROID_SDK_ROOT="/usr/local/share/android-sdk"' >> $BASH_ENV
echo 'export QEMU_AUDIO_DRV=none' >> $BASH_ENV
echo 'export JAVA_HOME=/Library/Java/Home' >> $BASH_ENV
- run:
name: Install Android sdk
command: |
HOMEBREW_NO_AUTO_UPDATE=1 brew tap homebrew/cask
HOMEBREW_NO_AUTO_UPDATE=1 brew cask install android-sdk
- run:
name: Install emulator dependencies
command: (yes | sdkmanager "platform-tools" "platforms;android-26" "extras;intel;Hardware_Accelerated_Execution_Manager" "build-tools;26.0.0" "system-images;android-26;google_apis;x86" "emulator" --verbose) || true
- *restore_gradle_cache
- *download_android_dependencies
- *save_gradle_cache
- run: avdmanager create avd -n Pixel_2_API_26 -k "system-images;android-26;google_apis;x86" -g google_apis -d "Nexus 5"
- run:
name: Run emulator in background
command: /usr/local/share/android-sdk/tools/emulator @Pixel_2_API_26 -skin 1080x2066 -memory 2048 -noaudio
background: true
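      # (sketch added for clarity, not part of the original gist) the emulator normally
      # needs time to boot before connectedAndroidTest can see a device; polling
      # sys.boot_completed with adb is a common pattern. Paths assume the ANDROID_HOME
      # exported in the earlier step.
      - run:
          name: Wait for emulator to boot
          command: |
            $ANDROID_HOME/platform-tools/adb wait-for-device
            while [[ "$($ANDROID_HOME/platform-tools/adb shell getprop sys.boot_completed | tr -d '\r')" != "1" ]]; do
              sleep 5
            done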
- run:
name: Run Tests
command: ./gradlew app:connectedAndroidTest
https://gist.github.com/DoguD/58b4b86a5d892130af84074078581b87
I hope it helps