My .travis.yml file has an error on line 25, where on: branch: master appears. The exact error is:
[x] syntax error: (): mapping values are not allowed in this context at line 25 column 5
I'm trying to run two different deployments depending on the branch I'm pushing to.
language: node_js
node_js:
- 10.8.0
cache:
directories:
- node_modules
services:
- mongodb
- docker
before_install:
- pip install --user awscli
- export PATH=$PATH:$HOME/.local/bin
install:
- npm install
before_script:
- npm run lint
script:
- npm run test
deploy:
# Production Deploy if we push to master
- eval $(aws ecr get-login --no-include-email --region us-east-1)
- docker build -t production-my-api .
- docker tag production-my-api:latest 989002346127.dkr.ecr.us-east-1.amazonaws.com/production-my-api:latest
- docker push 989002346127.dkr.ecr.us-east-1.amazonaws.com/production-my-api:latest
on:
branch: master
# Develop Deploy if we push to develop
- eval $(aws ecr get-login --no-include-email --region us-east-1)
- docker build -t develop-my-api .
- docker tag develop-my-api:latest 989002346127.dkr.ecr.us-east-1.amazonaws.com/develop-my-api:latest
- docker push 989002346127.dkr.ecr.us-east-1.amazonaws.com/develop-my-api:latest
on:
branch: develop
env:
global:
- secure: bNqX0zh25lC0LSvmpZqgKoHLyOoFKiHJxo5c52vo3BIJ6vqNAep9Ke5huiYYlA7IEbIBHAQcdYPbrSsSc8mCDtMTgNz5lDVCBvxKeYR6AtBHxMOf34AAjkbW5HK0Gy5KBs1Tj4EhAq0/7hPZF0S8rfuyfGPl2oIsx0MorHGCL9KG7LyAqZj+EXXbb71iky14ZD22yUbHFw8gmVmnX5ANGKYTg6rsX03ppy4V3H8iSG9VtatS5msl1xTG1Rgg1geSX95fMBE9Mzl+v6ZgSWf/WCyjUp5mwoTMJk+CtJrEcpoBQkRoMFa9pJeF/wlIgPuqfv7mbACv6IFiBx35KgONkT0Wvn6iEvbhA7Nxdy2thzAkc8Uo+73lqsjh799akZT4+4clUxilUzGsUbUPnvQf5fJYsO1cU/hIw9qPOFVlrib6tsDCaUQ8kpbo+kx7GgtNrp+AV7IZE0TTiOdpXTMv81vHGBmI5/trNeyEuhVFczlR9Ckk9F2j6x41WRkbYwoxH0xjdUUkl7QLMHz/wD7BjwaFpSCIid5FaNcTEaYU0pPs0d/2O9imyNolY6JFf/Fo6+twOrjKsFcgXrlVh3hA9CYmke1vccC847bzzbpKd3Q+g2q2JDe41HKBjs7AXIUIJZjA6RyV0+UTfCZYV0t3b18T7HkUPlcXpA6qVSOUR8E=
- secure: QN4RvzMHgm/13R86go2ba64JZWtWayaq9j34SWzJQuSwZA0vnVpagX1HYfLXZkkI57AD5uJOJsfQP4jDMF9BXA1SFMCHuUV5Fvnk7r7LlKuZoo58ppaaEXYtk/bBQd19ITZOR7SNEbOrR4Sp0KaimhGNOzJhZ4Uv9nCNvK9A+dfNT5MZ3qrWl6WbQRcvdw6iyz54Qz7Ckz/Lp6zq+VHoyDp3rymoD1wayMSSoodyStXjeftqBCfuRiXwXuo03MEgTyphF4Bbjxxkt9DOLmFuKnuYPK3gZuAShu49pwFSYCG5C0FN51tFtO60ojCLs+J6sLa7w5+U4XNFvvv6egUOBxpm+lHINdPx6LpVV4Qm8uVUJkxR2par3paRdWrPdCyqMBVGM1t+T7LP20F7YmI8Ds5fVfbQmTiKCNz5daP/AXtxrWjivdZq3stCAcloW8X/H11X11GzBDxMRAGUg2+9lpMgpTGeNoDBmeyf1NXwdxBsg1d+ISdqk6ZLIO49RmFqKrgT9qwg7bMcpEMfGmXMrHTyuLMXNVkZSkCq1KT3RlymFUzR1qEQvIFyDVFJhTIpBYlIjFgckDRTh/hLqywq3gEKkcSxlQwyBDp3Rw59nK/xl+TtCMNgj8TDWIw1XELSTRUbPugBZOde6zpV1PMUEsEwERIrQ2La+llKXz0+V1s=
Move the deploy commands into standalone shell scripts instead, for example deploy_production.sh:
#!/bin/bash
# Deploy to production
echo "Deploying to ACS"
$(aws ecr get-login --no-include-email --region us-east-1)
docker build -t production-my-api .
docker tag production-my-api:latest 989002346127.dkr.ecr.us-east-1.amazonaws.com/production-my-api:latest
docker push 989002346127.dkr.ecr.us-east-1.amazonaws.com/production-my-api:latest
Don't forget to make the scripts executable: chmod +x deploy_production.sh
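The underlying YAML error happens because Travis's deploy: section expects a list of provider entries, not raw shell commands, so a mapping key like on: sitting inside a list of plain strings is invalid. A minimal sketch of how the scripts could be wired up with the script deploy provider (deploy_develop.sh is a hypothetical counterpart of the script above; adjust the names to your repository):
deploy:
  # Production deploy when pushing to master
  - provider: script
    script: bash deploy_production.sh
    skip_cleanup: true
    on:
      branch: master
  # Develop deploy when pushing to develop
  - provider: script
    script: bash deploy_develop.sh
    skip_cleanup: true
    on:
      branch: develop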
Related
I need help caching packages in my Bitbucket pipeline that were installed via apt-get.
For packages installed by other means you can usually find the install path online, but I'm not sure which directory or directories to cache for packages installed with apt-get.
For example I have the following command in my pipeline script:
apt-get update && apt-get install -y curl unzip git
I defined a cache directory in definitions like so:
caches:
apt-cache: /var/cache/apt
However, it's only caching 164 bytes and I don't think it's caching all of the packages that are actually installed.
Is there a way to find where these packages are installed so I can cache them?
Here is my full pipeline script below:
image: php:8.2-fpm
definitions:
# set the paths for where the packages are installed that we are caching
# these paths are used to download the packages from the cache to speed up deploys
caches:
install-php-extensions: /usr/local/bin/
phpunit: web-app/vendor/bin/
composer: /usr/local/bin
# directory where apt package cache is
apt-cache: /var/cache/apt
php-extensions: /usr/lib/php/
sonar: ~/.sonar
steps:
- step: &testing
name: Test
caches:
- install-php-extensions
- phpunit
- composer
- apt-cache
- php-extensions
services:
- docker
script:
# Install apt packages
- apt-get update && apt-get install -y curl unzip git
# xdebug is needed to run the code coverage later on and to generate the code coverage report
- pecl install xdebug-3.2.0 && echo "zend_extension=$(find /usr/local/lib/php/extensions/ -name xdebug.so)" > /usr/local/etc/php/conf.d/xdebug.ini
# Install php extensions, set permissions to execute, required for snowflake, pdo, etc
# The PDO installation is required later so the composer install doesn't fail with an undefined constant
- curl -sSLf -o /usr/local/bin/install-php-extensions https://github.com/mlocati/docker-php-extension-installer/releases/download/1.5.49/install-php-extensions
- chmod +x /usr/local/bin/install-php-extensions
- install-php-extensions bcmath odbc pdo_odbc soap
# Install phpunit dependencies and run the phpunit tests with code coverage
- curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
- cd web-app
- composer require phpunit/phpunit --dev
- XDEBUG_MODE=coverage vendor/bin/phpunit --testdox -d memory_limit=-1 --log-junit test-results/test-execution-results.xml --cache-result --coverage-cache=./coverage/cache --coverage-clover=phpunit-coverage.xml tests/Unit
artifacts:
- test-results/test-execution-results.xml
- step: &sonarqube
name: Sonarqube coverage report
caches:
- sonar
script:
- cd web-app
- pipe: sonarsource/sonarqube-scan:1.0.0
variables:
SONAR_HOST_URL: ${SONAR_HOST_URL} # Get the value from the repository/workspace variable.
SONAR_TOKEN: ${SONAR_TOKEN}
DEBUG: "true"
- step: &deploy
name: Deploy
caches:
- docker
- apt-cache
services:
- docker
script:
# Install apt packages
- apt-get update && apt-get install -y unzip awscli
- TAG=${BITBUCKET_COMMIT}
# Set aws credentials
- aws configure set aws_access_key_id "${AWS_ACCESS_KEY}"
- aws configure set aws_secret_access_key "${AWS_SECRET_KEY}"
- aws configure set region "${AWS_REGION}"
# Get credentials for laravel from secrets manager and
# Write to .env file
- aws secretsmanager get-secret-value --secret-id ${ENV_SECRET_ID} --query SecretString --output text >> .env
# Write odbc snowflake definition for connecting to database
- aws secretsmanager get-secret-value --secret-id ${SNOWFLAKE_SECRET_ID} --query SecretString --output text > ./docker/php/snowflake/odbc.ini
# Authenticate bitbucket-deployment user
- aws ecr get-login-password --region us-west-2 | docker login -u AWS --password-stdin ${AWS_ACCOUNT_ID}.dkr.ecr.us-west-2.amazonaws.com
# Build/deploy nginx image
- NGINX_IMAGE="${AWS_ACCOUNT_ID}.dkr.ecr.us-west-2.amazonaws.com/nova/nginx"
- docker build -f Dockerfile-nginx -t $NGINX_IMAGE .
# Push the :latest image
- docker push $NGINX_IMAGE:latest
# Tag and push the image with bitbucket commit
- docker tag $NGINX_IMAGE $NGINX_IMAGE:${BITBUCKET_COMMIT}
- docker push $NGINX_IMAGE:${BITBUCKET_COMMIT}
# Build/deploy php image
- PHP_IMAGE="${AWS_ACCOUNT_ID}.dkr.ecr.us-west-2.amazonaws.com/nova/php-app"
- docker build -f Dockerfile-php -t $PHP_IMAGE .
# Push the :latest image
- docker push $PHP_IMAGE:latest
# Tag and push the image with bitbucket commit
- docker tag $PHP_IMAGE $PHP_IMAGE:${BITBUCKET_COMMIT}
- docker push $PHP_IMAGE:${BITBUCKET_COMMIT}
# Start ecs migration task
- aws ecs run-task --cluster nova-api-cluster --launch-type FARGATE --network-configuration "awsvpcConfiguration={subnets=['${PUBLIC_SUBNET_A}','${PUBLIC_SUBNET_B}'],securityGroups=['${SECURITY_GROUP}'],assignPublicIp=ENABLED}" --task-definition nova-api-migration-task
# Force new ecs task deployment
- aws ecs update-service --cluster ${CLUSTER_NAME} --service ${SERVICE_NAME} --region ${AWS_REGION} --force-new-deployment
- step: &auto_merge_down
name: Auto Merge Down
image: atlassian/default-image:3
script:
- ./autoMerge.sh stage || true
- ./autoMerge.sh dev || true
pipelines:
branches:
dev:
- step:
<<: *testing
- step:
<<: *deploy
deployment: Dev
stage:
- step:
<<: *testing
- step:
<<: *deploy
deployment: Staging
prod:
- step:
<<: *testing
- step:
<<: *sonarqube
- step:
<<: *deploy
deployment: Production
- step:
<<: *auto_merge_down
Found another answer on the Atlassian community here https://community.atlassian.com/t5/Bitbucket-questions/Any-way-to-cache-apt-get-install-y-zip-in-bitbucket-pipelines/qaq-p/622876, thanks @Chase Han.
Basically, you run the following command in your pipeline script (or in a local Docker image that matches the image used in the pipeline):
which <package-name-here>
e.g.
which git
It will output the path where that package's binary is installed.
e.g.
/usr/bin/git
Then you just add a cache definition for the path that contains that package.
e.g.
caches:
#/usr/bin located packages like git, curl, etc
usr-bin: /usr/bin
And then you can use that cache definition in your steps
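For instance, a minimal sketch of how the definition and a step fit together (the usr-bin name is the hypothetical definition above):
definitions:
  caches:
    # /usr/bin located packages like git, curl, etc
    usr-bin: /usr/bin
pipelines:
  default:
    - step:
        name: Test
        caches:
          - usr-bin
        script:
          - apt-get update && apt-get install -y curl unzip git
Keep in mind that caching /usr/bin only restores the executables; shared libraries under /usr/lib and apt's own metadata are not covered, so the apt-get commands may still need to run.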
This is my first time trying to set up CI from GitLab to Google Cloud, and so far the journey has been very painful, but I think I'm getting closer.
I followed the instructions from:
https://medium.com/google-cloud/deploy-to-cloud-run-using-gitlab-ci-e056685b8eeb
and adapted the .gitlab-ci.yml and the cloudbuild.yaml to my needs.
After several tries I finally managed to set up all the roles, permissions and service accounts, but I have no luck building my Dockerfile and pushing the image to Container Registry or Artifact Registry.
This is the failure log from the GitLab job:
Running with gitlab-runner 14.6.0~beta.71.gf035ecbf (f035ecbf)
on green-3.shared.runners-manager.gitlab.com/default Jhc_Jxvh
Preparing the "docker+machine" executor
Using Docker executor with image google/cloud-sdk:latest ...
Pulling docker image google/cloud-sdk:latest ...
Using docker image sha256:2ec5b4332b2fb4c55f8b70510b82f18f50cbf922f07be59de3e7f93937f3d37f for google/cloud-sdk:latest with digest google/cloud-sdk@sha256:e268d9116c9674023f4f6aff680987f8ee48d70016f7e2f407fe41e4d57b85b1 ...
Preparing environment
Running on runner-jhcjxvh-project-32231297-concurrent-0 via runner-jhcjxvh-shared-1641939667-f7d79e2f...
Getting source from Git repository
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/ProjectsD/node-projects/.git/
Created fresh repository.
Checking out 1f1e41f0 as dev...
Skipping Git submodules setup
Executing "step_script" stage of the job script
Using docker image sha256:2ec5b4332b2fb4c55f8b70510b82f18f50cbf922f07be59de3e7f93937f3d37f for google/cloud-sdk:latest with digest google/cloud-sdk@sha256:e268d9116c9674023f4f6aff680987f8ee48d70016f7e2f407fe41e4d57b85b1 ...
$ echo $GCP_SERVICE_KEY > gcloud-service-key.json
$ gcloud auth activate-service-account --key-file=gcloud-service-key.json
Activated service account credentials for: [gitlab-ci-cd@pdnodejs.iam.gserviceaccount.com]
$ gcloud config set project $GCP_PROJECT_ID
Updated property [core/project].
$ gcloud builds submit . --config=cloudbuild.yaml
Creating temporary tarball archive of 47 file(s) totalling 100.8 MiB before compression.
Some files were not included in the source upload.
Check the gcloud log [/root/.config/gcloud/logs/2022.01.11/22.23.29.855708.log] to see which files and the contents of the
default gcloudignore file used (see `$ gcloud topic gcloudignore` to learn
more).
Uploading tarball of [.] to [gs://pdnodejs_cloudbuild/source/1641939809.925215-a19e660f1d5040f3ac949d2eb5766abb.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/pdnodejs/locations/global/builds/577417e7-67b9-419e-b61b-f1be8105dd5a].
Logs are available at [https://console.cloud.google.com/cloud-build/builds/577417e7-67b9-419e-b61b-f1be8105dd5a?project=484193191648].
gcloud builds submit only displays logs from Cloud Storage. To view logs from Cloud Logging, run:
gcloud beta builds submit
BUILD FAILURE: Build step failure: build step 1 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
ERROR: (gcloud.builds.submit) build 577417e7-67b9-419e-b61b-f1be8105dd5a completed with status "FAILURE"
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
.gitlab-ci
# file: .gitlab-ci.yml
stages:
# - docker-build
- deploy_dev
# docker-build:
# stage: docker-build
# image: docker:latest
# services:
# - docker:dind
# before_script:
# - echo $CI_BUILD_TOKEN | docker login -u "$CI_REGISTRY_USER" --password-stdin $CI_REGISTRY
# script:
# - docker build --pull -t "$CI_REGISTRY_IMAGE" .
# - docker push "$CI_REGISTRY_IMAGE"
deploy_dev:
stage: deploy_dev
image: google/cloud-sdk:latest
script:
- echo $GCP_SERVICE_KEY > gcloud-service-key.json # google cloud service accounts
- gcloud auth activate-service-account --key-file=gcloud-service-key.json
- gcloud config set project $GCP_PROJECT_ID
- gcloud builds submit . --config=cloudbuild.yaml
cloudbuild.yaml
# File: cloudbuild.yaml
steps:
# build the container image
- name: 'gcr.io/cloud-builders/docker'
args: [ 'build', '-t', 'gcr.io/$PROJECT_ID/node-projects', '.' ]
# push the container image
- name: 'gcr.io/cloud-builders/docker'
args: [ 'push', 'gcr.io/$PROJECT_ID/node-projects']
# deploy to Cloud Run
- name: "gcr.io/cloud-builders/gcloud"
args: ['run', 'deploy', 'erp-ui', '--image', 'gcr.io/$PROJECT_ID/node-projects', '--region', 'us-central4', '--platform', 'managed', '--allow-unauthenticated']
options:
logging: CLOUD_LOGGING_ONLY
Is there any other configuration I'm missing inside GCP, or is something wrong with my files?
😮💨
UPDATE: I finally got it working.
I started everything over from scratch, and now the deploy goes through correctly.
.gitlab-ci
stages:
- build
- push
default:
image: docker:latest
services:
- docker:dind
before_script:
- echo $CI_BUILD_TOKEN | docker login -u "$CI_REGISTRY_USER" --password-stdin $CI_REGISTRY
docker-build:
stage: build
only:
refs:
- main
- dev
script:
- |
if [[ "$CI_COMMIT_BRANCH" == "$CI_DEFAULT_BRANCH" ]]; then
tag=""
echo "Running on default branch '$CI_DEFAULT_BRANCH': tag = 'latest'"
else
tag=":$CI_COMMIT_REF_SLUG"
echo "Running on branch '$CI_COMMIT_BRANCH': tag = $tag"
fi
- docker build --pull -t "$CI_REGISTRY_IMAGE${tag}" .
- docker push "$CI_REGISTRY_IMAGE${tag}"
# Run this job in a branch where a Dockerfile exists
interruptible: true
environment:
name: build/$CI_COMMIT_REF_NAME
push:
stage: push
only:
refs:
- main
- dev
script:
- apk upgrade --update-cache --available
- apk add openssl
- apk add curl python3 py-crcmod bash libc6-compat
- rm -rf /var/cache/apk/*
- curl https://sdk.cloud.google.com | bash > /dev/null
- export PATH=$PATH:/root/google-cloud-sdk/bin
- echo $GCP_SERVICE_KEY > gcloud-service-key-push.json # Google Cloud service accounts
- gcloud auth activate-service-account --key-file gcloud-service-key-push.json
- gcloud config set project $GCP_PROJECT_ID
- gcloud auth configure-docker us-central1-docker.pkg.dev
- tag=":$CI_COMMIT_REF_SLUG"
- docker pull "$CI_REGISTRY_IMAGE${tag}"
- docker tag "$CI_REGISTRY_IMAGE${tag}" us-central1-docker.pkg.dev/$GCP_PROJECT_ID/node-projects/node-js-app${tag}
- docker push us-central1-docker.pkg.dev/$GCP_PROJECT_ID/node-projects/node-js-app${tag}
environment:
name: push/$CI_COMMIT_REF_NAME
when: on_success
cloudbuild.yaml
# File: cloudbuild.yaml
steps:
# build the container image
- name: 'gcr.io/cloud-builders/docker'
args:
[
'build',
'-t',
'us-central1-docker.pkg.dev/$PROJECT_ID/node-projects/nodejsapp',
'.',
]
# push the container image
- name: 'gcr.io/cloud-builders/docker'
args: ['push', 'us-central1-docker.pkg.dev/$PROJECT_ID/node-projects/nodejsapp']
# deploy to Cloud Run
- name: 'gcr.io/cloud-builders/gcloud'
args:
[
'beta',
'run',
'deploy',
'dreamslear',
'--image',
'us-central1-docker.pkg.dev/$PROJECT_ID/node-projects/nodejsapp',
'--region',
'us-central1',
'--platform',
'managed',
'--port',
'3000',
'--allow-unauthenticated',
]
And that worked!
If someone wants to suggest an optimised workflow or any other advice, that would be great!
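One small cleanup worth considering (just a sketch, not tested against this project): GitLab now recommends rules: over the older only/except: keywords, so the repeated only: refs: blocks in both jobs could be replaced with something like:
docker-build:
  stage: build
  rules:
    - if: '$CI_COMMIT_BRANCH == "main" || $CI_COMMIT_BRANCH == "dev"'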
I set up a CodeBuild project for a Python project whose dependencies take too long to build, so I enabled caching of Docker layers. That works, but the cache only lasts a short while and is invalidated even for builds 15 minutes apart. Another solution I thought of was to pull the previously pushed Docker image in the pre_build step and use it as a build cache, but it doesn't seem to work. My buildspec:
version: 0.2
env:
secrets-manager:
DOCKERHUB_ID: arn:aws:secretsmanager:■■■■■■:■■■■■■:■■■■■■:■■■■■■/■■■■■■:■■■■■■
DOCKERHUB_TOKEN: arn:aws:secretsmanager:■■■■■■:■■■■■■:■■■■■■:■■■■■■/■■■■■■:■■■■■■
phases:
pre_build:
commands:
- echo Logging in to Amazon ECR...
- aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
- echo Logging in to Docker Hub...
- echo $DOCKERHUB_TOKEN | docker login -u $DOCKERHUB_ID --password-stdin
- docker pull $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME || true
build:
commands:
- echo Build started on `date`
- echo Building the Docker image on branch $CODEBUILD_WEBHOOK_HEAD_REF ...
- touch .env
- echo $ENV_PREFIX$IMAGE_REPO_NAME:$IMAGE_TAG
- docker build --cache-from $IMAGE_REPO_NAME:$IMAGE_TAG --build-arg BUILD_SECRET_KEY=$SECRET_KEY -t $IMAGE_REPO_NAME:$IMAGE_TAG -f docker/django/Dockerfile .
- docker tag $ENV_PREFIX$IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
post_build:
commands:
- echo Build completed on `date`
- echo Pushing the Docker image...
- docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
- IMAGE_DIFINITION_APP="{\"name\":\"${CONTAINER_NAME}\",\"imageUri\":\"${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${IMAGE_REPO_NAME}:${IMAGE_TAG}\"}"
- echo "[${IMAGE_DIFINITION_APP}]" > imagedefinitions.json
artifacts:
files: imagedefinitions.json
I can successfully pull the image in pre_build, but the build step gives me this error:
#7 ERROR: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
The role I'm using already has full ECR privileges. Is there any other permission I'm missing?
Any help is greatly appreciated.
Take a look here:
ECR policies
Maybe you did not add the required permissions in the ECR repository policy.
It took me a while to understand what went wrong, but instead of:
- docker build --cache-from $IMAGE_REPO_NAME:$IMAGE_TAG --build-arg BUILD_SECRET_KEY=$SECRET_KEY -t $IMAGE_REPO_NAME:$IMAGE_TAG -f docker/django/Dockerfile .
the registry name should be added before the repo name in --cache-from, otherwise Docker searches Docker Hub instead of ECR:
- docker build --cache-from $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME --build-arg BUILD_SECRET_KEY=$SECRET_KEY -t $IMAGE_REPO_NAME:$IMAGE_TAG -f docker/django/Dockerfile .
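A related note (an assumption about this setup, since the buildspec doesn't show it): if the build environment uses BuildKit, --cache-from only helps when the referenced image was built with inline cache metadata, so the build command would also need something like:
- DOCKER_BUILDKIT=1 docker build --build-arg BUILDKIT_INLINE_CACHE=1 --cache-from $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG --build-arg BUILD_SECRET_KEY=$SECRET_KEY -t $IMAGE_REPO_NAME:$IMAGE_TAG -f docker/django/Dockerfile .
With the legacy builder, the plain --cache-from against the pulled registry image is enough.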
I am building my project on CircleCI and I have a build job that looks like this:
build:
<<: *defaults
steps:
- checkout
- setup_remote_docker
- run:
name: Install pip
command: curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py && sudo python get-pip.py
- run:
name: Install AWS CLI
command: curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip" && unzip awscli-bundle.zip && sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
- run:
name: Login to Docker Registry
command: aws ecr get-login --no-include-email --region us-east-1 | sh
- run:
name: Install Dep
command: curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
- run:
name: Save Version Number
command: echo "export VERSION_NUM=${CIRCLE_TAG}.${CIRCLE_BUILD_NUM}" > deployment/dev/.env
- run:
name: Build App
command: source deployment/dev/.env && docker-compose -f deployment/dev/docker-compose.yml build
- run:
name: Test App
command: |
git config --global url."https://${GITHUB_PERSONAL_ACCESS_TOKEN}:x-oauth-basic@github.com/".insteadOf "https://github.com/"
dep ensure
go test -v ./...
- run:
name: Push Image
command: |
if [[ "${CIRCLE_TAG}" =~ ^v[0.9]+(\.[0-9]+)*-[a-z]*$ ]]; then
source deployment/dev/.env
docker-compose -f deployment/dev/docker-compose.yml push
else
echo 'No tag, not deploying'
fi
- persist_to_workspace:
root: .
paths:
- deployment/*
- tools/*
When I push a change to a branch, the build fails every time with "Couldn't connect to Docker daemon at ... - is it running?" when it reaches the Build App step of the build job.
Please help me figure out why branch builds are failing but tag builds are not.
I suspect you are hitting this docker-compose bug: https://github.com/docker/compose/issues/6050
The bug reports a misleading error (the one you're getting) when an image name in the docker-compose file is invalid.
If you use an environment variable for the image name or image tag, and that variable is set from a branch name, then it would fail on some branches, but not others.
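For illustration only (a hypothetical fragment, not the asker's actual compose file), the failure mode looks like this:
# deployment/dev/docker-compose.yml (hypothetical)
services:
  app:
    build: .
    image: example.registry.local/my-app:${VERSION_NUM}
If VERSION_NUM is empty or starts with a dot, the image reference is invalid and docker-compose surfaces the misleading "Couldn't connect to Docker daemon" error instead of a clear validation message.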
The problem was occurring in the Save Version Number step. Sometimes that version would be .${CIRCLE_BUILD_NUM} because no tag was passed, and Docker rejects tags that start with a dot, so I added a conditional check: if CIRCLE_TAG is empty, use a default version such as v0.1.0-build.
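Roughly, the reworked step could look like this (a sketch based on the description above; the exact default string may differ):
- run:
    name: Save Version Number
    command: |
      # Fall back to a default version when the build was not triggered by a tag
      if [ -z "${CIRCLE_TAG}" ]; then
        echo "export VERSION_NUM=v0.1.0-build.${CIRCLE_BUILD_NUM}" > deployment/dev/.env
      else
        echo "export VERSION_NUM=${CIRCLE_TAG}.${CIRCLE_BUILD_NUM}" > deployment/dev/.env
      fi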
I am a CircleCI user, and I am setting up an integration with Heroku.
I want to set up secure integrations with Docker Hub and Heroku from the CircleCI portal, using this config.yml file.
The problem is that CircleCI doesn't seem to know what these variables should be set to, and instead just echoes them:
${HEROKU_API_KEY} ${HEROKU_APP}
config.yml
version: 2
jobs:
build:
working_directory: ~/springboot_swagger_example-master-cassandra
docker:
- image: circleci/openjdk:8-jdk-browsers
steps:
- checkout
- restore_cache:
key: springboot_swagger_example-master-cassandra-{{ checksum "pom.xml" }}
- run: mvn dependency:go-offline
- save_cache:
paths:
- ~/.m2
key: springboot_swagger_example-master-cassandra-{{ checksum "pom.xml" }}
- type: add-ssh-keys
- type: deploy
name: "Deploy to Heroku"
command: |
if [ "${CIRCLE_BRANCH}" == "master" ]; then
# Install Heroku fingerprint (this is heroku's own key, NOT any of your private or public keys)
echo 'heroku.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAu8erSx6jh+8ztsfHwkNeFr/SZaSOcvoa8AyMpaerGIPZDB2TKNgNkMSYTLYGDK2ivsqXopo2W7dpQRBIVF80q9mNXy5tbt1WE04gbOBB26Wn2hF4bk3Tu+BNMFbvMjPbkVlC2hcFuQJdH4T2i/dtauyTpJbD/6ExHR9XYVhdhdMs0JsjP/Q5FNoWh2ff9YbZVpDQSTPvusUp4liLjPfa/i0t+2LpNCeWy8Y+V9gUlDWiyYwrfMVI0UwNCZZKHs1Unpc11/4HLitQRtvuk0Ot5qwwBxbmtvCDKZvj1aFBid71/mYdGRPYZMIxq1zgP1acePC1zfTG/lvuQ7d0Pe0kaw==' >> ~/.ssh/known_hosts
# git push git@heroku.com:yourproject.git $CIRCLE_SHA1:refs/heads/master
# Optional post-deploy commands
# heroku run python manage.py migrate --app=my-heroku-project
fi
- run: mvn package
- run:
name: Install Docker client
command: |
set -x
VER="17.03.0-ce"
curl -L -o /tmp/docker-$VER.tgz https://get.docker.com/builds/Linux/x86_64/docker-$VER.tgz
tar -xz -C /tmp -f /tmp/docker-$VER.tgz
mv /tmp/docker/* /usr/bin
- run:
name: Build Docker image
command: docker build -t joethecoder2/spring-boot-web:$CIRCLE_SHA1 .
- run:
name: Push to DockerHub
command: |
docker login -u$DOCKERHUB_LOGIN -p$DOCKERHUB_PASSWORD
docker push joethecoder2/spring-boot-web:$CIRCLE_SHA1
- run:
name: Setup Heroku
command: |
curl https://cli-assets.heroku.com/install-ubuntu.sh | sh
chmod +x .circleci/setup-heroku.sh
.circleci/setup-heroku.sh
- run:
name: Deploy to Heroku
command: |
mkdir app
cd app/
heroku create
# git push https://heroku:$HEROKU_API_KEY@git.heroku.com/$HEROKU_APP.git master
echo ${HEROKU_API_KEY}
echo ${HEROKU_APP}
git push https://heroku:${HEROKU_API_KEY}@git.heroku.com/${HEROKU_APP}.git master
- store_test_results:
path: target/surefire-reports
- store_artifacts:
path: target/spring-boot-web-0.0.1-SNAPSHOT.jar
The problem is that CircleCI doesn't seem to know what these variables should be set to, and instead just echoes them:
${HEROKU_API_KEY}
${HEROKU_APP}
The question, and the problem, is: why aren't these settings being detected automatically?
You need to set the value for the variables: https://circleci.com/docs/2.0/env-vars/
They are being echo'd because you're echoing them.
echo ${HEROKU_API_KEY}
echo ${HEROKU_APP}
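HEROKU_API_KEY and HEROKU_APP are not built-ins: CircleCI only injects its own CIRCLE_* variables plus whatever you define yourself under Project Settings > Environment Variables (or in a context). Once they are defined there, the push URL resolves correctly. A small sketch of a guard step that fails fast if they are still missing:
- run:
    name: Check Heroku variables
    command: |
      # Abort with a clear message if the project-level variables were never defined
      : "${HEROKU_API_KEY:?Set HEROKU_API_KEY in Project Settings > Environment Variables}"
      : "${HEROKU_APP:?Set HEROKU_APP in Project Settings > Environment Variables}"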