I'm trying to get an automatic build going via Travis CI to Google Cloud, but when I run "gcloud docker any_command", I get the message
ERROR: (gcloud) Invalid choice: 'docker'. Did you mean 'config'?
and when I try to install the docker component with "gcloud components install docker" I get
You cannot perform this action because the component manager has been
disabled for this installation. If you would like get the latest
version of the Google Cloud SDK, please see our main download page at:
https://developers.google.com/cloud/sdk/
ERROR: (gcloud.components.update) The component manager is disabled for this installation
This is my .travis.yml file -
sudo: required
language: python
python:
- "2.7"
deploy:
provider: gae
keyfile: client-secret.json
project: galvanic-being-138423
notifications:
email: false
services:
- docker
cache:
directories:
- $HOME/google-cloud-sdk/
env:
- GAE_PYTHONPATH=${HOME}/.cache/google_appengine PATH=$PATH:${HOME}/google-cloud-sdk/bin
PYTHONPATH=${PYTHONPATH}:${GAE_PYTHONPATH} CLOUDSDK_CORE_DISABLE_PROMPTS=1
before_install:
- openssl aes-256-cbc -K $encrypted_3ae578884e67_key -iv $encrypted_3ae578884e67_iv
-in credentials.tar.gz.enc -out credentials.tar.gz -d
- rm -rf ${HOME}/google-cloud-sdk/
- curl https://sdk.cloud.google.com | bash;
- ls -l ${HOME}/google-cloud-sdk/bin
- which gcloud
- gcloud --version
- if [ ! -d "${GAE_PYTHONPATH}" ]; then python scripts/fetch_gae_sdk.py $(dirname
"${GAE_PYTHONPATH}"); fi
- if [ ! -d ${HOME}/google-cloud-sdk ]; then curl https://sdk.cloud.google.com | bash;
fi
install:
- pip install pyOpenSSL
- sudo rm -rf /opt/google-cloud-sdk/
- export CLOUDSDK_CORE_DISABLE_PROMPTS=1
- export CLOUDSDK_PYTHON_SITEPACKAGES=1
- tar -xzf credentials.tar.gz
- mkdir -p lib
- gcloud auth activate-service-account galvanic-being-138423@appspot.gserviceaccount.com --key-file client-secret.json
- gcloud config set project galvanic-being-138423
- gcloud config set compute/zone europe-west1-c
- gcloud config set container/cluster example-cluster
- ssh-keygen -q -N "" -f ~/.ssh/google_compute_engine
- gcloud init galvanic-being-138423
- gcloud components update
- gcloud components install docker
- curl -L https://github.com/kubernetes/kubernetes/releases/download/v1.3.3/kubernetes.tar.gz > kubernetes.tar.gz
- tar -xf kubernetes.tar.gz
- sudo cp kubernetes/platforms/linux/amd64/kubectl /usr/local/bin/kubectl
- sudo chmod +x /usr/local/bin/kubectl
- docker pull wordpress:latest
- docker build -t gcr.io/galvanic-being-138423/wordpress-testing:v1 docker/
- gcloud docker push gcr.io/galvanic-being-138423/wordpress-testing:v1
- kubectl config set-cluster example-cluster --server=http://galvanic-being-138423.appspot.com
- kubectl config set-context example-cluster --cluster=example-cluster
- kubectl config use-context example-cluster
- kubectl run wordpress-testing --image=gcr.io/galvanic-being-138423/wordpress-testing:v1 --port=80
EDIT:
gcloud --version gives:
Google Cloud SDK 0.9.37
bq 2.0.18
bq-nix 2.0.18
compute 2014.11.25
core 2014.11.25
core-nix 2014.11.25
dns 2014.11.25
gcutil 1.16.5
gcutil-nix 1.16.5
gsutil 4.6
gsutil-nix 4.6
sql 2014.11.25
I need help caching packages in my Bitbucket pipeline that were installed via apt-get.
For packages that aren't installed via apt-get, you can look up online where they end up being installed. However, I'm not sure which directory (or directories) to cache for packages installed with apt-get.
For example I have the following command in my pipeline script:
apt-get update && apt-get install -y curl unzip git
I defined a cache directory in definitions like so:
caches:
apt-cache: /var/cache/apt
However, it's only caching 164 bytes and I don't think it's caching all of the packages that are actually installed.
Is there a way to find where these packages are installed so I can cache them?
Here is my full pipeline script below:
image: php:8.2-fpm
definitions:
# set the paths for where the packages are installed that we are caching
# these paths are used to download the packages from the cache to speed up deploys
caches:
install-php-extensions: /usr/local/bin/
phpunit: web-app/vendor/bin/
composer: /usr/local/bin
# directory where apt package cache is
apt-cache: /var/cache/apt
php-extensions: /usr/lib/php/
sonar: ~/.sonar
steps:
- step: &testing
name: Test
caches:
- install-php-extensions
- phpunit
- composer
- apt-cache
- php-extensions
services:
- docker
script:
# Install apt packages
- apt-get update && apt-get install -y curl unzip git
# xdebug is needed to run the code coverage later on and to generate the code coverage report
- pecl install xdebug-3.2.0 && echo "zend_extension=$(find /usr/local/lib/php/extensions/ -name xdebug.so)" > /usr/local/etc/php/conf.d/xdebug.ini
# Install php extensions, set permissions to execute, required for snowflake, pdo, etc
# The PDO installation is required later so the composer install doesn't fail with an undefined constant
- curl -sSLf -o /usr/local/bin/install-php-extensions https://github.com/mlocati/docker-php-extension-installer/releases/download/1.5.49/install-php-extensions
- chmod +x /usr/local/bin/install-php-extensions
- install-php-extensions bcmath odbc pdo_odbc soap
# Install phpunit dependencies and run the phpunit tests with code coverage
- curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
- cd web-app
- composer require phpunit/phpunit --dev
- XDEBUG_MODE=coverage vendor/bin/phpunit --testdox -d memory_limit=-1 --log-junit test-results/test-execution-results.xml --cache-result --coverage-cache=./coverage/cache --coverage-clover=phpunit-coverage.xml tests/Unit
artifacts:
- test-results/test-execution-results.xml
- step: &sonarqube
name: Sonarqube coverage report
caches:
- sonar
script:
- cd web-app
- pipe: sonarsource/sonarqube-scan:1.0.0
variables:
SONAR_HOST_URL: ${SONAR_HOST_URL} # Get the value from the repository/workspace variable.
SONAR_TOKEN: ${SONAR_TOKEN}
DEBUG: "true"
- step: &deploy
name: Deploy
caches:
- docker
- apt-cache
services:
- docker
script:
# Install apt packages
- apt-get update && apt-get install -y unzip awscli
- TAG=${BITBUCKET_COMMIT}
# Set aws credentials
- aws configure set aws_access_key_id "${AWS_ACCESS_KEY}"
- aws configure set aws_secret_access_key "${AWS_SECRET_KEY}"
- aws configure set region "${AWS_REGION}"
# Get credentials for laravel from secrets manager and
# Write to .env file
- aws secretsmanager get-secret-value --secret-id ${ENV_SECRET_ID} --query SecretString --output text >> .env
# Write odbc snowflake definition for connecting to database
- aws secretsmanager get-secret-value --secret-id ${SNOWFLAKE_SECRET_ID} --query SecretString --output text > ./docker/php/snowflake/odbc.ini
# Authenticate bitbucket-deployment user
- aws ecr get-login-password --region us-west-2 | docker login -u AWS --password-stdin ${AWS_ACCOUNT_ID}.dkr.ecr.us-west-2.amazonaws.com
# Build/deploy nginx image
- NGINX_IMAGE="${AWS_ACCOUNT_ID}.dkr.ecr.us-west-2.amazonaws.com/nova/nginx"
- docker build -f Dockerfile-nginx -t $NGINX_IMAGE .
# Push the :latest image
- docker push $NGINX_IMAGE:latest
# Tag and push the image with bitbucket commit
- docker tag $NGINX_IMAGE $NGINX_IMAGE:${BITBUCKET_COMMIT}
- docker push $NGINX_IMAGE:${BITBUCKET_COMMIT}
# Build/deploy php image
- PHP_IMAGE="${AWS_ACCOUNT_ID}.dkr.ecr.us-west-2.amazonaws.com/nova/php-app"
- docker build -f Dockerfile-php -t $PHP_IMAGE .
# Push the :latest image
- docker push $PHP_IMAGE:latest
# Tag and push the image with bitbucket commit
- docker tag $PHP_IMAGE $PHP_IMAGE:${BITBUCKET_COMMIT}
- docker push $PHP_IMAGE:${BITBUCKET_COMMIT}
# Start ecs migration task
- aws ecs run-task --cluster nova-api-cluster --launch-type FARGATE --network-configuration "awsvpcConfiguration={subnets=['${PUBLIC_SUBNET_A}','${PUBLIC_SUBNET_B}'],securityGroups=['${SECURITY_GROUP}'],assignPublicIp=ENABLED}" --task-definition nova-api-migration-task
# Force new ecs task deployment
- aws ecs update-service --cluster ${CLUSTER_NAME} --service ${SERVICE_NAME} --region ${AWS_REGION} --force-new-deployment
- step: &auto_merge_down
name: Auto Merge Down
image: atlassian/default-image:3
script:
- ./autoMerge.sh stage || true
- ./autoMerge.sh dev || true
pipelines:
branches:
dev:
- step:
<<: *testing
- step:
<<: *deploy
deployment: Dev
stage:
- step:
<<: *testing
- step:
<<: *deploy
deployment: Staging
prod:
- step:
<<: *testing
- step:
<<: *sonarqube
- step:
<<: *deploy
deployment: Production
- step:
<<: *auto_merge_down
I found another answer on the Atlassian Community here: https://community.atlassian.com/t5/Bitbucket-questions/Any-way-to-cache-apt-get-install-y-zip-in-bitbucket-pipelines/qaq-p/622876, thanks @Chase Han.
Basically, you run the following command either in your pipeline script or in a local Docker image that matches the image you use in the pipeline:
which <package-name-here>
e.g.
which git
Then it will output a path where it exists.
e.g.
/usr/bin/git
Then you just need to include the path that contains that package in your cache definitions.
e.g.
caches:
#/usr/bin located packages like git, curl, etc
usr-bin: /usr/bin
And then you can use that cache definition in your steps, as in the sketch below.
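For example, a step can then restore that cache before reinstalling the packages. A minimal sketch, assuming the usr-bin definition above (the step name and script here are placeholders):

definitions:
  caches:
    # /usr/bin is where the apt-get installed binaries (git, curl, unzip) live
    usr-bin: /usr/bin
pipelines:
  default:
    - step:
        name: Test
        caches:
          - usr-bin
        script:
          - apt-get update && apt-get install -y curl unzip git
          - git --version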
I am working on Apollo Federation. So far, I have successfully deployed my services to a Google Kubernetes Engine cluster using Travis.
The only remaining issue is including apollo service:push --serviceURL=http://auth-cluster-ip-service --serviceName=auth in my CI/CD, but I have no idea how. It's my first time setting up CI/CD.
My working travis config file without apollo service:push is:
sudo: required
services:
- docker
env:
global:
- SHA=$(git rev-parse HEAD)
- CLOUDSDK_CORE_DISABLE_PROMPTS=1
language: node_js
node_js:
- 10
before_install:
- openssl aes-256-cbc -K $encrypted_9f3b5599b056_key -iv $encrypted_9f3b5599b056_iv -in service-account.json.enc -out service-account.json -d
- curl https://sdk.cloud.google.com | bash > /dev/null;
- source $HOME/google-cloud-sdk/path.bash.inc
- gcloud components update kubectl
- gcloud auth activate-service-account --key-file service-account.json
- gcloud config set project salading-production
- gcloud config set compute/zone asia-northeast3-a
- gcloud container clusters get-credentials salading-cluster
- echo "$SHA"
- echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
script:
- echo "skipping tests"
deploy:
provider: script
script: bash ./deploy.sh
on:
branch: master
Below is the deploy.sh file:
docker build -t hoffnung8493/salading-auth:latest -t hoffnung8493/salading-auth:$SHA .
docker push hoffnung8493/salading-auth:latest
docker push hoffnung8493/salading-auth:$SHA
kubectl apply -f k8s
kubectl set image deployment/auth-deployment auth=hoffnung8493/salading-auth:$SHA
I tried adding two lines in deploy.sh:
npm i -g apollo
apollo service:push --serviceURL=http://auth-cluster-ip-service --serviceName=auth --endpoint=http://auth-cluster-ip-service
and got the following errors:
Loading Apollo Project [started]
Loading Apollo Project [completed]
Uploading service to Apollo Graph Manager [started]
Fetching info from federated service
Uploading service to Apollo Graph Manager [failed]
→ request to http://auth-cluster-ip-service/ failed, reason: getaddrinfo ENOTFOUND auth-cluster-ip-service auth-cluster-ip-service:80
FetchError: request to http://auth-cluster-ip-service/ failed, reason: getaddrinfo ENOTFOUND auth-cluster-ip-service auth-cluster-ip-service:80
at ClientRequest.<anonymous> (~/.nvm/versions/node/v10.19.0/lib/node_modules/apollo/node_modules/node-fetch/lib/index.js:1455:11)
OK, while I was typing my own question on Stack Overflow, I came up with the solution. Since I had already finished typing the question, I decided to share the solution here.
It turns out an easy solution is simply adding five lines to after_deploy in the .travis.yml file.
npm run dev &: this runs the Node server in the background.
Afterwards, sleep 3 gives the server some time to start up.
Finally, the last command pushes the new GraphQL schema to Apollo Graph Manager.
Note that in your travis-ci.com settings you have to add Apollo's ENGINE_API_KEY as an environment variable. Also note that your Node server may print some connection errors; in my case I did not provide the Redis and MongoDB connection environment variables. But as long as the server itself is running for introspection, apollo service:push will work fine.
sudo: required
services:
- docker
env:
global:
- SHA=$(git rev-parse HEAD)
- CLOUDSDK_CORE_DISABLE_PROMPTS=1
language: node_js
node_js:
- 10
before_install:
- openssl aes-256-cbc -K $encrypted_9f3b5599b056_key -iv $encrypted_9f3b5599b056_iv -in service-account.json.enc -out service-account.json -d
- curl https://sdk.cloud.google.com | bash > /dev/null;
- source $HOME/google-cloud-sdk/path.bash.inc
- gcloud components update kubectl
- gcloud auth activate-service-account --key-file service-account.json
- gcloud config set project salading-production
- gcloud config set compute/zone asia-northeast3-a
- gcloud container clusters get-credentials salading-cluster
- echo "$SHA"
- echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
script:
- echo "skipping tests"
deploy:
provider: script
script: bash ./deploy.sh
on:
branch: master
after_deploy:
- npm install
- npm i -g apollo
- npm run dev &
- sleep 3
- apollo service:push --serviceURL=http://auth-cluster-ip-service --serviceName=auth --endpoint=http://localhost:3051
I am trying to deploy to a Kubernetes cluster using Travis CI and I get the following errors.
EDIT:
invalid argument "myAcc/imgName:" for t: invalid reference format
See docker build --help
./deploy.sh: line 1: kubectl: command not found
This is my travis config file
travis.yml
sudo: required
services:
- docker
env:
global:
- SHA-$(git rev-parse HEAD)
- CLOUDSDK_CORE_DISABLE_PROMPTS=1
before-install:
- openssl aes-256-cbc -K $encrypted_0c35eebf403c_key -iv $encrypted_0c35eebf403c_iv -in service-account.json.enc -out service-account.json -d
- curl https://sdk.cloud.google.com | bash > /dev/null
- source $HOME/google-cloud-sdk/path.bash.inc
- gcloud components update kubectl
- gcloud auth activate-service-account --key-file service-account.json
- gcloud config set project robust-chess-234104
- gcloud config set compute/zone asia-south1-a
- gcloud container clusters get-credentials standard-cluster-1
- echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
deploy:
provider: script
script: bash ./deploy.sh
on:
branch: master
This is my deploy script
deploy.sh
docker build -t myAcc/imgName:$SHA
docker push myAcc/imgName:$SHA
kubectl apply -k8s
I guess the gcloud components update kubectl command is not working. Any ideas?
Thanks!
The first issue, invalid argument "myAcc/imgName:" for t: invalid reference format, occurs because the variable $SHA is not defined as expected. There is a syntax issue in the variable definition: you should use = instead of - after SHA, so it should be like this:
- SHA=$(git rev-parse HEAD)
The second issue is related to kubectl: you need to install it using the following command, according to the docs:
gcloud components install kubectl
Update:
After testing this file on Travis CI I was able to figure out the issue. You should use before_install instead of before-install, so in your case the before-install steps never get executed.
# travis.yml
---
env:
global:
- CLOUDSDK_CORE_DISABLE_PROMPTS=1
before_install:
- curl https://sdk.cloud.google.com | bash > /dev/null
- source $HOME/google-cloud-sdk/path.bash.inc
- gcloud components install kubectl
script: kubectl version
And the final part of the build result:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.7", GitCommit:"65ecaf0671341311ce6aea0edab46ee69f65d59e", GitTreeState:"clean", BuildDate:"2019-01-24T19:32:00Z", GoVersion:"go1.10.7", Compiler:"gc", Platform:"linux/amd64"}
I am building my project on CircleCI and I have a build job that looks like this:
build:
<<: *defaults
steps:
- checkout
- setup_remote_docker
- run:
name: Install pip
command: curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py && sudo python get-pip.py
- run:
name: Install AWS CLI
command: curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip" && unzip awscli-bundle.zip && sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
- run:
name: Login to Docker Registry
command: aws ecr get-login --no-include-email --region us-east-1 | sh
- run:
name: Install Dep
command: curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
- run:
name: Save Version Number
command: echo "export VERSION_NUM=${CIRCLE_TAG}.${CIRCLE_BUILD_NUM}" > deployment/dev/.env
- run:
name: Build App
command: source deployment/dev/.env && docker-compose -f deployment/dev/docker-compose.yml build
- run:
name: Test App
command: |
git config --global url."https://${GITHUB_PERSONAL_ACCESS_TOKEN}:x-oauth-basic@github.com/".insteadOf "https://github.com/"
dep ensure
go test -v ./...
- run:
name: Push Image
command: |
if [[ "${CIRCLE_TAG}" =~ ^v[0.9]+(\.[0-9]+)*-[a-z]*$ ]]; then
source deployment/dev/.env
docker-compose -f deployment/dev/docker-compose.yml push
else
echo 'No tag, not deploying'
fi
- persist_to_workspace:
root: .
paths:
- deployment/*
- tools/*
When I push a change to a branch, the build fails every time with "Couldn't connect to Docker daemon at ... - is it running?" when it reaches the Build App step of the build job.
Please help me figure out why branch builds are failing but tag builds are not.
I suspect you are hitting this docker-compose bug: https://github.com/docker/compose/issues/6050
The bug reports a misleading error (the one you're getting) when an image name in the docker-compose file is invalid.
If you use an environment variable for the image name or image tag, and that variable is set from a branch name, then it would fail on some branches, but not others.
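For illustration, here is a minimal docker-compose service of the kind affected by that bug (the service, image, and variable names are hypothetical, since the question's compose file isn't shown):

version: "3"
services:
  app:
    build: .
    # If IMAGE_TAG expands to an empty or otherwise invalid value (for
    # example, one derived from a branch name containing "/"), docker-compose
    # can report the misleading "Couldn't connect to Docker daemon" error
    # instead of complaining about the invalid image reference.
    image: myrepo/app:${IMAGE_TAG}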
The problem was occurring in the Save Version Number step. Sometimes that version would be .${CIRCLE_BUILD_NUM}, since no tag was passed on branch builds. Docker rejects tags that start with ., so I added a conditional check: if CIRCLE_TAG is empty, use a default version, v0.1.0-build. See the sketch below.
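A minimal sketch of how that check might look in the Save Version Number step (kept close to the original command; the default version string is only an illustration):

- run:
    name: Save Version Number
    command: |
      # Branch builds have no CIRCLE_TAG, which would produce a Docker tag
      # starting with "."; fall back to a fixed default version instead.
      if [ -z "${CIRCLE_TAG}" ]; then
        echo "export VERSION_NUM=v0.1.0-build" > deployment/dev/.env
      else
        echo "export VERSION_NUM=${CIRCLE_TAG}.${CIRCLE_BUILD_NUM}" > deployment/dev/.env
      fi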
I'm trying to set up Codecov as the code coverage tool in my repository. I referred to this link to pass reports through a Docker container:
Link - https://github.com/codecov/support/wiki/Testing-with-Docker
But Travis CI fails and gives this error:
docker: Error parsing reference: "..." is not a valid repository/tag.
Here is my travis.yml
sudo: required
dist: trusty
language: node_js
node_js:
- 6
before_install:
- export CHROME_BIN=chromium-browser
- export DISPLAY=:99.0
- sh -e /etc/init.d/xvfb start
- docker run -v "$PWD/shared:/shared" ...
before_script:
- ng build
script:
- ng test --watch=false
- ng lint
- >
docker run -ti -v $(pwd):/app --workdir=/app coala/base coala --version
after_success:
- bash ./deploy.sh
- bash <(curl -s https://codecov.io/bash)
- mv -r coverage/ shared
cache:
bundler: true
directories:
- node_modules
- .coala-cache
services: docker
branches:
only:
- angular
How should I solve this? Thanks!
I assume you refer to Codecov Outside Docker. The current error message already tells you that the three dots ... need to be replaced with a real Docker repository name, e.g. node:6-alpine.
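Concretely, the placeholder line from your before_install would then look something like this (the trailing command is just an illustration; what you actually run in the container depends on your setup):

- docker run -v "$PWD/shared:/shared" node:6-alpine node --version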
What you're still missing is the part of running the tests (including reports) inside the Docker container, so that you can mv the test reports to the shared folder. You could achieve that by adding a custom Dockerfile based on node, similar to the one below. I chose a more or less full base image including Chrome and other tools to make your use case work:
FROM markadams/chromium-xvfb-js:7
WORKDIR /proj
CMD npm install && \
node_modules/.bin/ng build && \
node_modules/.bin/ng test --watch=false && \
node_modules/.bin/ng lint && \
mkdir -p shared && \
mv coverage.txt shared
That custom image needs to be built and then run like this (assuming the Dockerfile is in your project root directory):
docker build -t ci-build .
docker run --rm -v "$(pwd):/proj" ci-build
I suggest changing the .travis.yml as follows:
sudo: required
dist: trusty
language: node_js
node_js:
- 6
before_install:
- docker build -t ci-build .
script:
- >
docker run --rm -v $(pwd):/proj ci-build
- >
docker run -ti -v $(pwd):/app --workdir=/app coala/base coala --version
after_success:
- bash ./deploy.sh
- bash <(curl -s https://codecov.io/bash)
cache:
bundler: true
directories:
- node_modules
- .coala-cache
services: docker
branches:
only:
- angular
Another note: the coala/base image works similarly.