Ansible playbook: execute only specific tasks per environment (Jenkins)

I have a playbook with a bunch of tasks:
vars:
  params_ENV_SERVER: "{{ lookup('env', 'ENV_SERVER') }}"
  params_UML_SUFFIX: "{{ lookup('env', 'UML_SUFFIX') }}"
tasks:
  - name: delete previous files
    shell: ssh deploy@{{ params_ENV_SERVER }} sudo rm -rf /opt/jenkins-files/*
    become: true
    become_user: deploy
  - name: create build dir
    shell: ssh deploy@{{ params_ENV_SERVER }} sudo mkdir -p /opt/jenkins-files/build
    become: true
    become_user: deploy
  - name: chown build dir
    shell: ssh deploy@{{ params_ENV_SERVER }} sudo chown -R deploy:deploy /opt/jenkins-files
    become: true
    become_user: deploy
which I call from a Jenkinsfile for the PROD and QA environments:
withEnv(["ENV_SERVER=192.168.1.30", "UML_SUFFIX=stage-QA"]) {
    sh "ansible-playbook nginx-depl.yml --limit 127.0.0.1"
}
withEnv(["ENV_SERVER=192.168.1.130", "UML_SUFFIX=stage-PROD"]) {
    sh "ansible-playbook nginx-depl.yml --limit 127.0.0.1"
}
Is it possible to modify the playbook so that on QA all tasks run, but on PROD only the 2nd and 3rd?

Is this what you are looking for?
- name: delete previous files
  shell: ssh deploy@{{ params_ENV_SERVER }} sudo rm -rf /opt/jenkins-files/*
  become: true
  become_user: deploy
  when: params_UML_SUFFIX == 'stage-QA'
- name: create build dir
  shell: ssh deploy@{{ params_ENV_SERVER }} sudo mkdir -p /opt/jenkins-files/build
  become: true
  become_user: deploy
  when: params_UML_SUFFIX == 'stage-QA' or
        params_UML_SUFFIX == 'stage-PROD'
- name: chown build dir
  shell: ssh deploy@{{ params_ENV_SERVER }} sudo chown -R deploy:deploy /opt/jenkins-files
  become: true
  become_user: deploy
  when: params_UML_SUFFIX == 'stage-QA' or
        params_UML_SUFFIX == 'stage-PROD'
Optionally, "Ansible-way" would be creating the inventory
shell> cat hosts
[prod]
192.168.1.130
[qa]
192.168.1.30
and declare all hosts in the playbook
shell> cat playbook.yml
- hosts: all
  tasks:
    - debug:
        msg: "Delete previous files.
              Execute module file on {{ inventory_hostname }}"
      when: inventory_hostname in groups.qa
    - debug:
        msg: "Create build dir.
              Execute module file on {{ inventory_hostname }}"
      when: inventory_hostname in groups.qa or
            inventory_hostname in groups.prod
    - debug:
        msg: "Chown build dir.
              Execute module file on {{ inventory_hostname }}"
      when: inventory_hostname in groups.qa or
            inventory_hostname in groups.prod
You can omit "become: true" and "become_user: deploy" and declare the remote user on the command-line. For example
shell> ansible-playbook -u deploy -i hosts playbook.yml
gives (abridged)
TASK [debug] ****
skipping: [192.168.1.130]
ok: [192.168.1.30] =>
msg: Delete previous files. Execute module file on 192.168.1.30
TASK [debug] ****
ok: [192.168.1.130] =>
msg: Create build dir. Execute module file on 192.168.1.130
ok: [192.168.1.30] =>
msg: Create build dir. Execute module file on 192.168.1.30
TASK [debug] ****
ok: [192.168.1.30] =>
msg: Chown build dir. Execute module file on 192.168.1.30
ok: [192.168.1.130] =>
msg: Chown build dir. Execute module file on 192.168.1.130
You can limit the execution to particular hosts or groups. For example, the command below would execute on the prod group only
shell> ansible-playbook -u deploy -i hosts playbook.yml --limit prod
gives (abridged)
TASK [debug] ****
skipping: [192.168.1.130]
TASK [debug] ****
ok: [192.168.1.130] =>
msg: Create build dir. Execute module file on 192.168.1.130
TASK [debug] ****
ok: [192.168.1.130] =>
msg: Chown build dir. Execute module file on 192.168.1.130
Notes
"Ansible-way" is to execute modules on the remote hosts.
Replace the debug tasks with file
Integrate into one tasks "create build dir" and "chown build dir"
If you run the playbook as user deploy you can omit the parameter "-u deploy"
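A minimal sketch of the last two notes combined: the file module both creates the directory and sets ownership, so the second and third tasks collapse into one (paths and group names follow the question; note that state: absent removes the directory itself, not just its contents):
- hosts: all
  tasks:
    - name: delete previous files
      file:
        path: /opt/jenkins-files
        state: absent
      when: inventory_hostname in groups.qa
    - name: create build dir owned by deploy
      file:
        path: /opt/jenkins-files/build
        state: directory
        owner: deploy
        group: deploy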

Related

GitHub Action creating Docker Host Context

Here is my attempt at creating a Docker Host Context via GitHub Actions:
name: CICD
on:
  push:
    branches:
      - main
      - staging
  workflow_dispatch:
jobs:
  build_and_deploy_monitoring:
    concurrency: monitoring
    runs-on: [self-hosted, linux, X64]
    steps:
      - uses: actions/checkout@v2
      - name: Save secrets to mon.env files
        run: |
          echo "DATA_SOURCE_NAME=${{ secrets.DB_DATASOURCE }}" >> mon.env
          echo "GF_SECURITY_ADMIN_USER=${{ secrets.GF_ADMIN_USER }}" >> mon.env
          echo "GF_SECURITY_ADMIN_PASSWORD=${{ secrets.GF_ADMIN_PASS }}" >> mon.env
          echo "DISCORD_TOKEN=${{ secrets.DISCORD_TOKEN }}" >> mon.env
          echo "PROMCORD_PREFIX=promcord_" >> mon.env
          echo "DB_CONNECTION_STRING=${{ secrets.DBC_STRING }}" >> mon.env
      # - name: Setup SSH stuff
      #   run: |
      #     sudo mkdir -p ~/.ssh/
      #     sudo echo "${{ secrets.SSH_KEY }}" >> ~/.ssh/tempest
      #     sudo chmod 0400 ~/.ssh/tempest
      #     sudo echo "${{ secrets.KNOWN_HOSTS }}" >> ~/.ssh/known_hosts
      #     sudo echo -e "Host ${{ secrets.SSH_HOST }}\n\tHostName ${{ secrets.SSH_HOST }}\n\tUser ${{ secrets.SSH_USER }}\n\tIdentityFile ~/.ssh/tempest" >> ~/.ssh/config
      - name: Install docker-compose
        run: sudo pip install docker-compose
      - name: Create context for docker host
        run: docker context create remote --docker
      - name: Set default context for docker
        run: docker context use remote
      - name: Always build the monitoring stack
        run: COMPOSE_PARAMIKO_SSH=1 COMPOSE_IGNORE_ORPHANS=1 docker-compose --context remote -f docker-compose-monitoring.yml up --build -d
The output is:
Run docker context create remote --docker
docker context create remote --docker
shell: /usr/bin/bash -e {0}
/actions-runner/actions-runner/_work/_temp/05fc146a-237e-4a92-b27d-796451184c0c.sh: line 1: docker: command not found
Error: Process completed with exit code 127.
I am trying to create a workflow that stands up a Docker Compose stack for some monitoring tools. I have set up self-hosted GitHub runners for this, and everything has been successful until the docker host section; the error is given above. Can I get some help, as I am completely stumped?
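Two observations, offered as assumptions rather than a definitive fix: exit code 127 means the docker CLI itself is not installed on the self-hosted runner, and a context for a remote engine normally names its endpoint explicitly. A sketch of the context steps with a hypothetical user and host:
- name: Create context for docker host
  # user/host below are placeholders, not from the question
  run: docker context create remote --docker "host=ssh://deploy@remote-host"
- name: Set default context for docker
  run: docker context use remote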

Gitlab CI job with specific user

I am trying to run a Gitlab CI job that uses anchore engine to scan a docker image. The command in the script section fails with a permission denied error. I found out the command requires root permissions, but sudo is not installed in the docker image I'm using as the gitlab runner, and only the non-sudo user anchore exists in the container.
Below is the CI job for container scanning.
container_scan:
  stage: scan
  image:
    name: anchore/anchore-engine:latest
    entrypoint: ['']
  services:
    - name: anchore/engine-db-preload:latest
      alias: anchore-db
  variables:
    GIT_STRATEGY: none
    ANCHORE_HOST_ID: "localhost"
    ANCHORE_ENDPOINT_HOSTNAME: "localhost"
    ANCHORE_CLI_USER: "admin"
    ANCHORE_CLI_PASS: "foobar"
    ANCHORE_CLI_SSL_VERIFY: "n"
    ANCHORE_FAIL_ON_POLICY: "true"
    ANCHORE_TIMEOUT: "500"
  script:
    - |
      curl -o /tmp/anchore_ci_tools.py https://raw.githubusercontent.com/anchore/ci-tools/master/scripts/anchore_ci_tools.py
      chmod +x /tmp/anchore_ci_tools.py
      ln -s /tmp/anchore_ci_tools.py /usr/local/bin/anchore_ci_tools
    - anchore_ci_tools --setup
    - anchore-cli registry add "$CI_REGISTRY" gitlab-ci-token "$CI_JOB_TOKEN" --skip-validate
    - anchore_ci_tools --analyze --report --image "$IMAGE_NAME" --timeout "$ANCHORE_TIMEOUT"
    - |
      if [ "$ANCHORE_FAIL_ON_POLICY" == "true" ]; then
        anchore-cli evaluate check "$IMAGE_NAME"
      else
        set +o pipefail
        anchore-cli evaluate check "$IMAGE_NAME" | tee /dev/null
      fi
  artifacts:
    name: ${CI_JOB_NAME}-${CI_COMMIT_REF_NAME}
    paths:
      - image-*-report.json
The CI job fails at ln -s /tmp/anchore_ci_tools.py /usr/local/bin/anchore_ci_tools in the script section.
I have tried to add a user in the entrypoint section
  name: anchore/anchore-engine:latest
  entrypoint: ['bash', '-c', 'useradd myuser && exec su myuser -c bash']
but it did not allow me to create a user. I have tried running the docker container on linux with docker run -it --user=root anchore/anchore-engine:latest /bin/bash and it ran without any problem. How can I simulate the same in a gitlab-ci job?
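One workaround sketch, assuming you can push a derived image to a registry you control (the image name below is made up): bake root back in with a tiny Dockerfile and point the job's image at it.
# Dockerfile - hypothetical derived image that runs as root
FROM anchore/anchore-engine:latest
USER root
Then the job references it with the same empty entrypoint:
  image:
    name: registry.example.com/anchore-engine-root:latest
    entrypoint: ['']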

Unable to run npm command in ansible awx_task container

I have been using ansible core for some time now, and as my team expands the need for ansible awx has become more pressing. I have been working at it for a week and I think it's time to shout for help.
We had a process of replacing the baseurl of angularjs apps with some variable using ansible, and of setting some settings before we compile them (currently thinking of a different way of doing this using a build server like TeamCity, but right now we are trying to keep up with ansible awx).
Ansible core checks out the code from the git branch version, replaces the variables, zips it to s3, etc.
Knowing that, the ansible awx host was configured with nvm, then node was installed, and .nvm was mapped to /home/awx/.nvm
I have also mapped a bashrc to /home/awx/.bashrc. When I log into the awx_task container with docker exec -it awx_task /bin/bash I see the below:
[root@awx ~]# npm --version
5.5.1
[root@awx ~]# echo $PATH
/home/awx/.nvm/versions/node/v8.9.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
[root@awx ~]# env
NVM_DIR=/home/awx/.nvm
LANG=en_US.UTF-8
HOSTNAME=awx
NVM_CD_FLAGS=
DO_ANSIBLE_HOME=/opt/do_ansible_awx_home
PWD=/home/awx
HOME=/home/awx
affinity:container==eb57afe832eaa32472812d0cd8b614be6df213d8e866f1d7b04dfe109a887e44
TERM=xterm
NVM_BIN=/home/awx/.nvm/versions/node/v8.9.3/bin
SHLVL=1
LANGUAGE=en_US:en
PATH=/home/awx/.nvm/versions/node/v8.9.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
LESSOPEN=||/usr/bin/lesspipe.sh %s
_=/usr/bin/env
[root@awx ~]# cat /home/awx/.bashrc
# .bashrc
# User specific aliases and functions
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
All the volume mappings etc. were done with the installer role templates and tasks, so the output above is the same after multiple docker restarts and reinstalls via the ansible awx installer playbook. But during the execution of the playbook that makes use of npm, it seems to have a different env PATH: /var/lib/awx/venv/ansible/bin:/var/lib/awx/venv/awx/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
At this point, I am not sure whether I failed to configure the path properly or other containers like awx_web should also be configured etc.
I have also noticed the env NVM_BIN and modified the npm playbook to include the path to the npm executable:
- name: Running install to build npm modules
  npm:
    path: "{{ bps_git_checkout_folder }}"
    executable: "{{ lookup('env','NVM_BIN') }}/npm"
and it doesn't even show during execution, thus pointing at a different path and env variables being loaded.
I will be grateful if you could shed some light on whatever I am doing wrongly.
Thanks in advance
EDITS: After implementing @sergei's suggestion, I have used the extra vars npm_bin: /home/awx/.nvm/versions/node/v8.9.3/bin
I have changed the task to look like:
- name: Running install to build npm modules
  npm:
    path: "{{ bps_git_checkout_folder }}"
    executable: "{{ npm_bin }}/npm"
But it produced this result:
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209 `" && echo
ansible-tmp-1579790680.4419668-165048670233209="` echo /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209 `" ) &&
sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/packaging/language/npm.py
<127.0.0.1> PUT /var/lib/awx/.ansible/tmp/ansible-local-10173xtu81x_o/tmpd40htayd TO /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/ /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py", line 114, in <module>
_ansiballz_main()
File "/root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py", line 106, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/root/.ansible/tmp/ansible-tmp-1579790680.4419668-165048670233209/AnsiballZ_npm.py", line 49, in invoke_module
imp.load_module('__main__', mod, module, MOD_DESC)
File "/usr/lib64/python3.6/imp.py", line 235, in load_module
return load_source(name, filename, file)
File "/usr/lib64/python3.6/imp.py", line 170, in load_source
module = _exec(spec, sys.modules[name])
File "<frozen importlib._bootstrap>", line 618, in _exec
File "<frozen importlib._bootstrap…
PLAY RECAP
*********************************************************************
localhost : ok=5 changed=4 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
I have also tried to use the shell module directly with the following:
- name: Running npm install
  shell: "{{ npm_bin }}/npm install"
  args:
    chdir: "{{ bps_git_checkout_folder }}"
That has produced this instead:
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218 `" && echo
ansible-tmp-1579791187.453365-253173616238218="` echo /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218 `" ) &&
sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/commands/command.py
<127.0.0.1> PUT /var/lib/awx/.ansible/tmp/ansible-local-10395h1ga8fw3/tmpepeig729 TO /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218/ /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1579791187.453365-253173616238218/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": true,
"cmd": "/home/awx/.nvm/versions/node/v8.9.3/bin/npm install",
"delta": "0:00:00.005528",
"end": "2020-01-23 14:53:07.928843",
"invocation": {
"module_args": {
"_raw_params": "/home/awx/.nvm/versions/node/v8.9.3/bin/npm install",
"_uses_shell": true,
"argv": null,
"chdir": "/opt/do_ansible_awx_home/gh/deployments/sandbox/bps",
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"msg": "non-zero return code",
"rc": 127,
…
PLAY RECAP
*********************************************************************
localhost : ok=5 changed=4 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Not really seeing what's wrong here. Grateful if anybody can shed some light on this.
Where are your packages sitting: on the host or inside the container? All execution happens in the task container.
If your npm files are sitting on the 'host' and not in the container, then you have to refer to the host the containers are sitting on to reach that path.
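If the nvm install is genuinely visible inside awx_task, one sketch is to hand the task its PATH explicitly via Ansible's per-task environment keyword (npm_bin as in the question's edit; ansible_env.PATH requires fact gathering to be enabled):
- name: Running install to build npm modules
  npm:
    path: "{{ bps_git_checkout_folder }}"
  environment:
    # prepend the nvm bin dir so the module's npm lookup succeeds
    PATH: "{{ npm_bin }}:{{ ansible_env.PATH }}"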

CircleCI environmental variables for HEROKU not being set properly causing GIT to fail

I am a CircleCI user, and I am setting up an integration with Heroku.
I want to set up security integrations with DockerHub and also with Heroku from the CircleCI portal page, using this config.yml file.
The problem is that CircleCI doesn't seem to know what these variables should be set to, and instead just echoes:
${HEROKU_API_KEY} ${HEROKU_APP}
config.yml
version: 2
jobs:
  build:
    working_directory: ~/springboot_swagger_example-master-cassandra
    docker:
      - image: circleci/openjdk:8-jdk-browsers
    steps:
      - checkout
      - restore_cache:
          key: springboot_swagger_example-master-cassandra-{{ checksum "pom.xml" }}
      - run: mvn dependency:go-offline
      - save_cache:
          paths:
            - ~/.m2
          key: springboot_swagger_example-master-cassandra-{{ checksum "pom.xml" }}
      - type: add-ssh-keys
      - type: deploy
        name: "Deploy to Heroku"
        command: |
          if [ "${CIRCLE_BRANCH}" == "master" ]; then
            # Install Heroku fingerprint (this is heroku's own key, NOT any of your private or public keys)
            echo 'heroku.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAu8erSx6jh+8ztsfHwkNeFr/SZaSOcvoa8AyMpaerGIPZDB2TKNgNkMSYTLYGDK2ivsqXopo2W7dpQRBIVF80q9mNXy5tbt1WE04gbOBB26Wn2hF4bk3Tu+BNMFbvMjPbkVlC2hcFuQJdH4T2i/dtauyTpJbD/6ExHR9XYVhdhdMs0JsjP/Q5FNoWh2ff9YbZVpDQSTPvusUp4liLjPfa/i0t+2LpNCeWy8Y+V9gUlDWiyYwrfMVI0UwNCZZKHs1Unpc11/4HLitQRtvuk0Ot5qwwBxbmtvCDKZvj1aFBid71/mYdGRPYZMIxq1zgP1acePC1zfTG/lvuQ7d0Pe0kaw==' >> ~/.ssh/known_hosts
            # git push git@heroku.com:yourproject.git $CIRCLE_SHA1:refs/heads/master
            # Optional post-deploy commands
            # heroku run python manage.py migrate --app=my-heroku-project
          fi
      - run: mvn package
      - run:
          name: Install Docker client
          command: |
            set -x
            VER="17.03.0-ce"
            curl -L -o /tmp/docker-$VER.tgz https://get.docker.com/builds/Linux/x86_64/docker-$VER.tgz
            tar -xz -C /tmp -f /tmp/docker-$VER.tgz
            mv /tmp/docker/* /usr/bin
      - run:
          name: Build Docker image
          command: docker build -t joethecoder2/spring-boot-web:$CIRCLE_SHA1 .
      - run:
          name: Push to DockerHub
          command: |
            docker login -u$DOCKERHUB_LOGIN -p$DOCKERHUB_PASSWORD
            docker push joethecoder2/spring-boot-web:$CIRCLE_SHA1
      - run:
          name: Setup Heroku
          command: |
            curl https://cli-assets.heroku.com/install-ubuntu.sh | sh
            chmod +x .circleci/setup-heroku.sh
            .circleci/setup-heroku.sh
      - run:
          name: Deploy to Heroku
          command: |
            mkdir app
            cd app/
            heroku create
            # git push https://heroku:$HEROKU_API_KEY@git.heroku.com/$HEROKU_APP.git master
            echo ${HEROKU_API_KEY}
            echo ${HEROKU_APP}
            git push https://heroku:${HEROKU_API_KEY}@git.heroku.com/${HEROKU_APP}.git master
      - store_test_results:
          path: target/surefire-reports
      - store_artifacts:
          path: target/spring-boot-web-0.0.1-SNAPSHOT.jar
The question, and the problem, is: why aren't these settings being detected automatically?
You need to set the value for the variables: https://circleci.com/docs/2.0/env-vars/
They are being echoed because you're echoing them:
echo ${HEROKU_API_KEY}
echo ${HEROKU_APP}
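Once the variables are defined in the project settings, a small sketch to make the job fail fast with a readable message if they go missing again (standard shell parameter expansion, nothing CircleCI-specific):
- run:
    name: Deploy to Heroku
    command: |
      # ${VAR:?msg} aborts the step if VAR is unset or empty
      : "${HEROKU_API_KEY:?Set HEROKU_API_KEY in CircleCI project settings}"
      : "${HEROKU_APP:?Set HEROKU_APP in CircleCI project settings}"
      git push https://heroku:${HEROKU_API_KEY}@git.heroku.com/${HEROKU_APP}.git master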

Are Instrumentation tests for Android Espresso available on CircleCi 2.0?

Are Instrumentation tests for Android Espresso available on CircleCI 2.0?
If yes, can anybody, please, help to configure config.yml file for me?
I've made a thousand attempts and no luck. I can run unit tests, but not instrumentation tests.
Thanks
The answer to this question is: yes, instrumentation tests are possible on CircleCI. This is the configuration I have:
version: 2
jobs:
  build:
    working_directory: ~/code
    docker:
      - image: circleci/android:api-25-alpha
    environment:
      JVM_OPTS: -Xmx3200m
    steps:
      - checkout
      - restore_cache:
          key: jars-{{ checksum "build.gradle" }}-{{ checksum "app/build.gradle" }}
      - run:
          name: Chmod permissions # if permission for Gradlew Dependencies fails, use this.
          command: sudo chmod +x ./gradlew
      - run:
          name: Download Dependencies
          command: ./gradlew androidDependencies
      - save_cache:
          paths:
            - ~/.gradle
          key: jars-{{ checksum "build.gradle" }}-{{ checksum "app/build.gradle" }}
      - run:
          name: Setup emulator
          command: sdkmanager "system-images;android-25;google_apis;armeabi-v7a" && echo "no" | avdmanager create avd -n test -k "system-images;android-25;google_apis;armeabi-v7a"
      - run:
          name: Launch emulator
          command: export LD_LIBRARY_PATH=${ANDROID_HOME}/emulator/lib64:${ANDROID_HOME}/emulator/lib64/qt/lib && emulator64-arm -avd test -noaudio -no-boot-anim -no-window -accel on
          background: true
      - run:
          name: Wait emulator
          command: |
            # wait for it to have booted
            circle-android wait-for-boot
            # unlock the emulator screen
            sleep 30
            adb shell input keyevent 82
      - run:
          name: Run Tests
          command: ./gradlew connectedAndroidTest
      - store_artifacts:
          path: app/build/reports
          destination: reports
      - store_test_results:
          path: app/build/test-results
The only problem with this configuration is that it doesn't lead to a successful build because of a not-enough-memory error. If somebody has a better configuration, please share.
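A hedged suggestion against that memory error (not from the original answer): cap Gradle's JVM and disable the daemon in gradle.properties so the build stays within the container's limit.
# gradle.properties - keep Gradle within the CI container's memory
org.gradle.jvmargs=-Xmx2048m
org.gradle.daemon=false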
I am running Android UI tests on CircleCI MacOS executor.
Here is my configuration:
version: 2
reference:
  ## Constants
  gradle_cache_path: &gradle_cache_path
    gradle_cache-{{ checksum "build.gradle" }}-{{ checksum "app/build.gradle" }}
  workspace: &workspace
    ~/src
  ## Configurations
  android_config: &android_config
    working_directory: *workspace
    macos:
      xcode: "9.4.0"
    shell: /bin/bash --login -eo pipefail
    environment:
      TERM: dumb
      JVM_OPTS: -Xmx3200m
  ## Cache
  restore_gradle_cache: &restore_gradle_cache
    restore_cache:
      key: *gradle_cache_path
  save_gradle_cache: &save_gradle_cache
    save_cache:
      key: *gradle_cache_path
      paths:
        - ~/.gradle
  ## Dependency Downloads
  download_android_dependencies: &download_android_dependencies
    run:
      name: Download Android Dependencies
      command: ./gradlew androidDependencies
jobs:
  ui_test:
    <<: *android_config
    steps:
      - checkout
      - run:
          name: Setup environment variables
          command: |
            echo 'export PATH="$PATH:/usr/local/opt/node@8/bin:${HOME}/.yarn/bin:${HOME}/${CIRCLE_PROJECT_REPONAME}/node_modules/.bin:/usr/local/share/android-sdk/tools/bin"' >> $BASH_ENV
            echo 'export ANDROID_HOME="/usr/local/share/android-sdk"' >> $BASH_ENV
            echo 'export ANDROID_SDK_HOME="/usr/local/share/android-sdk"' >> $BASH_ENV
            echo 'export ANDROID_SDK_ROOT="/usr/local/share/android-sdk"' >> $BASH_ENV
            echo 'export QEMU_AUDIO_DRV=none' >> $BASH_ENV
            echo 'export JAVA_HOME=/Library/Java/Home' >> $BASH_ENV
      - run:
          name: Install Android sdk
          command: |
            HOMEBREW_NO_AUTO_UPDATE=1 brew tap homebrew/cask
            HOMEBREW_NO_AUTO_UPDATE=1 brew cask install android-sdk
      - run:
          name: Install emulator dependencies
          command: (yes | sdkmanager "platform-tools" "platforms;android-26" "extras;intel;Hardware_Accelerated_Execution_Manager" "build-tools;26.0.0" "system-images;android-26;google_apis;x86" "emulator" --verbose) || true
      - *restore_gradle_cache
      - *download_android_dependencies
      - *save_gradle_cache
      - run: avdmanager create avd -n Pixel_2_API_26 -k "system-images;android-26;google_apis;x86" -g google_apis -d "Nexus 5"
      - run:
          name: Run emulator in background
          command: /usr/local/share/android-sdk/tools/emulator @Pixel_2_API_26 -skin 1080x2066 -memory 2048 -noaudio
          background: true
      - run:
          name: Run Tests
          command: ./gradlew app:connectedAndroidTest
https://gist.github.com/DoguD/58b4b86a5d892130af84074078581b87
I hope it helps
