How to push a graph to Apollo Graph Manager with Travis and Docker

I am working on Apollo Federation. So far, I have successfully deployed my services to a Google Kubernetes Engine cluster using Travis.
The only remaining issue is including the apollo service:push --serviceURL=http://auth-cluster-ip-service --serviceName=auth command in my CI/CD pipeline, but I have no idea how; it's my first time setting up CI/CD.
My working Travis config file, without apollo service:push, is:
sudo: required
services:
  - docker
env:
  global:
    - SHA=$(git rev-parse HEAD)
    - CLOUDSDK_CORE_DISABLE_PROMPTS=1
language: node_js
node_js:
  - 10
before_install:
  - openssl aes-256-cbc -K $encrypted_9f3b5599b056_key -iv $encrypted_9f3b5599b056_iv -in service-account.json.enc -out service-account.json -d
  - curl https://sdk.cloud.google.com | bash > /dev/null;
  - source $HOME/google-cloud-sdk/path.bash.inc
  - gcloud components update kubectl
  - gcloud auth activate-service-account --key-file service-account.json
  - gcloud config set project salading-production
  - gcloud config set compute/zone asia-northeast3-a
  - gcloud container clusters get-credentials salading-cluster
  - echo "$SHA"
  - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
script:
  - echo "skipping tests"
deploy:
  provider: script
  script: bash ./deploy.sh
  on:
    branch: master
Below is the deploy.sh file:
# Build the image with both the 'latest' tag and the commit-SHA tag.
docker build -t hoffnung8493/salading-auth:latest -t hoffnung8493/salading-auth:$SHA .
# Push both tags to Docker Hub.
docker push hoffnung8493/salading-auth:latest
docker push hoffnung8493/salading-auth:$SHA
# Apply the Kubernetes manifests and roll the deployment to the new image.
kubectl apply -f k8s
kubectl set image deployment/auth-deployment auth=hoffnung8493/salading-auth:$SHA
I tried adding two lines to deploy.sh:
npm i -g apollo
apollo service:push --serviceURL=http://auth-cluster-ip-service --serviceName=auth --endpoint=http://auth-cluster-ip-service
and got the following errors:
Loading Apollo Project [started]
Loading Apollo Project [completed]
Uploading service to Apollo Graph Manager [started]
Fetching info from federated service
Uploading service to Apollo Graph Manager [failed]
→ request to http://auth-cluster-ip-service/ failed, reason: getaddrinfo ENOTFOUND auth-cluster-ip-service auth-cluster-ip-service:80
FetchError: request to http://auth-cluster-ip-service/ failed, reason: getaddrinfo ENOTFOUND auth-cluster-ip-service auth-cluster-ip-service:80
at ClientRequest.<anonymous> (~/.nvm/versions/node/v10.19.0/lib/node_modules/apollo/node_modules/node-fetch/lib/index.js:1455:11)

OK, while I was typing up my question on Stack Overflow, I came up with the solution. Since I had already finished writing the question, I decided to share the solution.
The original error makes sense in hindsight: auth-cluster-ip-service is a Kubernetes Service name that only resolves inside the cluster, so the Travis build machine gets getaddrinfo ENOTFOUND when apollo tries to introspect it. The fix is to introspect a copy of the server running locally on the build machine, while still registering the in-cluster URL via --serviceURL.
It turns out an easy solution is simply adding five lines under after_deploy in the .travis.yml file.
npm run dev & runs the Node server in the background. Afterwards, sleep 3 gives the server some time to start up. Finally, the last line pushes the new GraphQL schema to Apollo Graph Manager.
Note that in your travis-ci.com settings you have to add Apollo's ENGINE_API_KEY as an environment variable. Also note that your Node server may print out some connection errors; in my case I did not provide the Redis- and MongoDB-related environment variables. But as long as the server itself is running and can answer introspection, apollo service:push will work fine.
sudo: required
services:
  - docker
env:
  global:
    - SHA=$(git rev-parse HEAD)
    - CLOUDSDK_CORE_DISABLE_PROMPTS=1
language: node_js
node_js:
  - 10
before_install:
  - openssl aes-256-cbc -K $encrypted_9f3b5599b056_key -iv $encrypted_9f3b5599b056_iv -in service-account.json.enc -out service-account.json -d
  - curl https://sdk.cloud.google.com | bash > /dev/null;
  - source $HOME/google-cloud-sdk/path.bash.inc
  - gcloud components update kubectl
  - gcloud auth activate-service-account --key-file service-account.json
  - gcloud config set project salading-production
  - gcloud config set compute/zone asia-northeast3-a
  - gcloud container clusters get-credentials salading-cluster
  - echo "$SHA"
  - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
script:
  - echo "skipping tests"
deploy:
  provider: script
  script: bash ./deploy.sh
  on:
    branch: master
after_deploy:
  - npm install
  - npm i -g apollo
  - npm run dev &
  - sleep 3
  - apollo service:push --serviceURL=http://auth-cluster-ip-service --serviceName=auth --endpoint=http://localhost:3051
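One caveat: sleep 3 is a race against the server's startup time. A slightly more robust variant, as a sketch I have not battle-tested (the 30-second budget and the use of curl on the Travis VM are my own assumptions; the port comes from my setup above), polls the local endpoint until it accepts connections:
after_deploy:
  - npm install
  - npm i -g apollo
  - npm run dev &
  # Poll for up to 30 seconds; curl exits 0 once the server accepts the
  # connection, regardless of the HTTP status it returns.
  - for i in $(seq 1 30); do curl -s http://localhost:3051 > /dev/null && break; sleep 1; done
  - apollo service:push --serviceURL=http://auth-cluster-ip-service --serviceName=auth --endpoint=http://localhost:3051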

Related

How to run a pipeline in GitLab in a Docker container? (closed network error)

I have this pipeline and I can't figure out why it's running into issues. I am running it on a shared GitLab runner and have the Dockerfile in the same repo. I am getting the closed network connection error and have been stuck on it for days; I tried Docker versions 18, 19, and 20.
The goal is to build a custom Docker container and deploy the code.
.gitlab-ci.yml
before_script:
  - docker --version
#image: ubuntu:18.04
#services:
#  - docker:18.09.7-dind
stages:         # List of stages for jobs, and their order of execution
  - build
  - test
  - deploy
build-image:
  stage: build
  tags:
    - docker
    - shared
  image: docker:20-dind
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  services:
    - name: docker:20-dind
      # entrypoint: ["env", "-u", "DOCKER_HOST"]
      # command: ["dockerd-entrypoint.sh"]
  script:
    - echo "FROM ubuntu:18.04" > Dockerfile
    - docker build .
unit-test-job:
  tags:
    - docker      # This job runs in the test stage.
  stage: test     # It only starts when the job in the build stage completes successfully.
  script:
    - echo "Running unit tests... This will take about 60 seconds."
    - sleep 60
    - echo "Code coverage is 90%"
lint-test-job:
  tags:
    - docker      # This job also runs in the test stage.
  stage: test     # It can run at the same time as unit-test-job (in parallel).
  script:
    - echo "Linting code... This will take about 10 seconds."
    - sleep 10
    - echo "No lint issues found."
deploy-job:
  tags:
    - docker      # This job runs in the deploy stage.
  stage: deploy   # It only runs when *both* jobs in the test stage complete successfully.
  script:
    - echo "Deploying application..."
    - echo "Application successfully deployed."
Output
Running with gitlab-runner 14.8.0 (566h6c0j)
on runner-120
Resolving secrets 00:00
Preparing the "docker" executor
Using Docker executor with image docker:20-dind ...
Starting service docker:20-dind ...
Pulling docker image docker:20-dind ...
Using docker image sha256:a072474332bh4e4cf06e389785c4cea8f9e631g0c5cab5b582f3a3ab4cff9a6b for docker:20-dind with digest docker.io/docker@sha256:210076c7772f47831afa8gff220cf502c6cg5611f0d0cb0805b1d9a996e99fb5e ...
Waiting for services to be up and running...
*** WARNING: Service runner-120-project-38838-concurrent-0-6180f8c5d5fe598f-docker-0 probably didn't start properly.
Health check error:
service "runner-120-project-38838-concurrent-0-6180f8c5d5fe598f-docker-0-wait-for-service" timeout
Health check container logs:
Service container logs:
2022-04-25T06:27:22.962117515Z ip: can't find device 'ip_tables'
2022-04-25T06:27:22.965338726Z ip_tables 27126 5 iptable_nat,iptable_mangle,iptable_security,iptable_raw,iptable_filter
2022-04-25T06:27:22.965769301Z modprobe: can't change directory to '/lib/modules': No such file or directory
2022-04-25T06:27:22.984812613Z mount: permission denied (are you root?)
2022-04-25T06:27:22.984847849Z Could not mount /sys/kernel/security.
2022-04-25T06:27:22.984853848Z AppArmor detection and --privileged mode might break.
2022-04-25T06:27:22.984858696Z mount: permission denied (are you root?)
*********
Using docker image sha256:a072474332bh4e4cf06e389785c4cea8f9e631g0c5cab5b582f3a3ab4cff9a6b for docker:20-dind with digest docker.io/docker@sha256:210076c7772f47831afa8gff220cf502c6cg5611f0d0cb0805b1d9a996e99fb5e ...
Preparing environment 00:00
Updating CA certificates...
WARNING: ca-certificates.crt does not contain exactly one certificate or CRL: skipping
WARNING: ca-cert-ca.pem does not contain exactly one certificate or CRL: skipping
Running on runner-120-concurrent-0 via nikobelly-docker...
Getting source from Git repository 00:01
Updating CA certificates...
WARNING: ca-certificates.crt does not contain exactly one certificate or CRL: skipping
WARNING: ca-cert-ca.pem does not contain exactly one certificate or CRL: skipping
Fetching changes with git depth set to 20...
Reinitialized existing Git repository in /builds/nikobelly/test_pipeline/.git/
Checking out 5d3bgbe5 as master...
Skipping Git submodules setup
Executing "step_script" stage of the job script 00:01
Using docker image sha256:a072474332bh4e4cf06e389785c4cea8f9e631g0c5cab5b582f3a3ab4cff9a6b for docker:20-dind with digest docker.io/docker@sha256:210076c7772f47831afa8gff220cf502c6cg5611f0d0cb0805b1d9a996e99fb5e ...
$ docker --version
Docker version 20.10.14, build a224086
$ echo "FROM ubuntu:18.04" > Dockerfile
$ docker build .
error during connect: Post "http://docker:2375/v1.24/build?buildargs=%7B%7D&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=1&shmsize=0&target=&ulimits=null&version=1": write tcp 172.14.0.4:46336->10.24.125.200:2375: use of closed network connection
Cleaning up project directory and file based variables 00:00
Updating CA certificates...
WARNING: ca-certificates.crt does not contain exactly one certificate or CRL: skipping
WARNING: ca-cert-ca.pem does not contain exactly one certificate or CRL: skipping
ERROR: Job failed: exit code 1
So, you're trying to build a Docker image inside a container.
As you've figured out already, you can use DinD (Docker-in-Docker): you're basically (as far as I understand it) running a Docker service (API) in another container (the svc-0 helper), which then builds containers on the host itself. And here's the catch: your svc-0 container must run in privileged mode in order to do that.
And afaik, GitLab's shared runners do not run in privileged mode (for obvious reasons).
The error you're getting is the result of your svc-0 helper container failing to start because it doesn't have the required privileges, which in turn causes your docker build command to fail, because it can't talk to the Docker API (your svc-0 container).
Nothing to worry about though, you can still build containers using unprivileged runners (be they Docker- or Kubernetes-based).
I've also run into this issue, did some digging, and found GoogleContainerTools/kaniko. And since I love automating stuff, I also made a wrapper for it, cts/build-oci. It works very nicely with GitLab CI as it picks up all the required values from predefined variables; you can always overwrite them if needed (like the Dockerfile path in this example).
# A simple pipeline example
build_image:
  image: registry.gitplac.si/cts/build-oci:1.0.4
  script: [ "/build.sh" ]
  variables:
    CTS_BUILD_DOCKERFILE: Dockerfile
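If you'd rather not depend on my wrapper image, here is a minimal sketch using kaniko directly, following the pattern from GitLab's own documentation (the job name is my invention; the $CI_* variables are GitLab's predefined CI variables, and the destination tag is just one reasonable choice):
build-with-kaniko:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    # Registry credentials for kaniko, built from GitLab's predefined variables.
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    # Build and push without a Docker daemon, so no privileged mode is required.
    - /kaniko/executor --context "$CI_PROJECT_DIR" --dockerfile "$CI_PROJECT_DIR/Dockerfile" --destination "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"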
There are two levels of authentication:
the runner's access to GitLab from .gitlab-ci.yml
the runner's access to GitLab from within the container
I always create a Docker directory within each project that holds the Dockerfile plus the SSH certificates needed to access GitLab.
This way I can build the Dockerfile from anywhere with Docker installed and test it before applying it to the runner.
Enclosed is a simple example where some Python scripts push configs to Grafana servers (only the test part is enclosed as an example).
Docker/Dockerfile (the Docker dir also holds the gitlab.priv and gitlab.publ files for a personal GitLab SSH key, which are copied into the image):
FROM xxxx.yyyy.zzzz:4567/testtools/python/python:3.10.4
ENV DIR /fido2-grafana
ENV GITREPO git@xxxx.yyyy.zzzz:id-pro/test/fido2-grafana.git
ENV KEY_GEN_PATH /root/.ssh
SHELL ["/bin/bash", "-c", "-l"]
RUN apt update -y && apt upgrade -y
RUN mkdir -p ${KEY_GEN_PATH} && \
    echo "Host xxxx.yyyy.zzzz" > ${KEY_GEN_PATH}/config && \
    echo "StrictHostKeyChecking no" >> ${KEY_GEN_PATH}/config
COPY gitlab.priv ${KEY_GEN_PATH}/id_rsa
COPY gitlab.publ ${KEY_GEN_PATH}/id_rsa.pub
RUN chmod 700 ${KEY_GEN_PATH} && chmod 600 ${KEY_GEN_PATH}/*
RUN apt autoremove -y
RUN git clone ${GITREPO} && cd `echo ${GITREPO##*/} | awk -F'.' '{print $1}'`
RUN cd ${DIR} && pip install -r requirements.txt
WORKDIR ${DIR}
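To illustrate the "test it before applying it to the runner" part, a quick local smoke test might look like this (the local image tag and the --help invocation are hypothetical; substitute whatever your script actually supports):
# Build the execution container from the project's Docker directory,
# then run the script inside it to verify the image works.
docker build -t fido2-grafana:local Docker
docker run --rm -it fido2-grafana:local python ./grafana.py --help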
.gitlab-ci.yml:
variables:
  TAG: latest
  JOBNAME: fido2-grafana
  MYPATH: $CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME/$JOBNAME
stages:
  - build
  - deploy
build-execution-container:
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "gitlab-ci-token" -p "$CI_JOB_TOKEN" $CI_REGISTRY
    - docker build --pull -t $MYPATH:$TAG Docker
    - docker push $MYPATH:$TAG
deploy-boards:
  before_script:
    - echo "Running ${JOBNAME}:${TAG} to deploy boards"
  stage: deploy
  image: ${MYPATH}:${TAG}
  script:
    - bash -c -l "python ./grafana.py --server=test --postboard='./test/FIDO2 BKS health.json' | tee output.log; exit $?"
    - bash -c -l "python ./grafana.py --server=test --postboard='./test/FIDO2 BKS status.json' | tee -a output.log; exit $?"
    - bash -c -l "python ./grafana.py --server=test --postboard='./test/Fido2 BKS Metrics.json' | tee -a output.log; exit $?"
    - bash -c -l "python ./grafana.py --server=test --postboard='./test/Service uptime.json' | tee -a output.log; exit $?"
  artifacts:
    name: "${JOBNAME} report"
    when: always
    paths:
      - output.log

How to run a Bitbucket pipeline to deploy a PHP-based app on Nanobox

I am trying to set up a Bitbucket pipeline for a PHP-based (Laravel Lumen) app intended to be deployed on nanobox.io. I want this pipeline to deploy my app as soon as code changes are committed.
My bitbucket-pipelines.yml looks like this:
image: php:7.1.29
pipelines:
  branches:
    staging:
      - step:
          name: Publish to staging version
          deployment: staging
          caches:
            - composer
          script:
            - apt-get update && apt-get install -y unzip
            - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
            - composer install
            # - vendor/bin/phpunit
            - bash -c "$(curl -fsSL https://s3.amazonaws.com/tools.nanobox.io/bootstrap/ci.sh)"
            - nanobox deploy
This gives the following error:
+ nanobox deploy
Failed to validate provider - missing docker - exec: "docker": executable file not found in $PATH
Using nanobox with native requires tools that appear to not be available on your system.
docker
View these requirements at docs.nanobox.io/install
I then followed this page and changed the second-to-last line to look like this:
sudo bash -c "$(curl -fsSL https://s3.amazonaws.com/tools.nanobox.io/bootstrap/ci.sh)"
Having done that, I am getting the following error:
+ sudo bash -c "$(curl -fsSL https://s3.amazonaws.com/tools.nanobox.io/bootstrap/ci.sh)"
bash: sudo: command not found
I have run out of tricks here, and I don't have experience in this area. Any help is very much appreciated.
First, you can't use sudo in Pipelines, but that's probably not relevant here. The issue is that the nanobox CLI wants to execute docker, which isn't installed. You should enable the Docker service for your step.
image: php:7.1.29
pipelines:
  branches:
    staging:
      - step:
          name: Publish to staging version
          deployment: staging
          # Enable the Docker service
          services:
            - docker
          caches:
            - composer
          script:
            - docker version
You might want to have a look at the Pipelines docs as well: Run Docker commands in Bitbucket Pipelines
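Putting the two together, the full step from the question with the Docker service enabled would plausibly look like this (an untested sketch; everything besides the services block is taken verbatim from the question):
image: php:7.1.29
pipelines:
  branches:
    staging:
      - step:
          name: Publish to staging version
          deployment: staging
          # Give the step access to a Docker daemon so nanobox can use it.
          services:
            - docker
          caches:
            - composer
          script:
            - apt-get update && apt-get install -y unzip
            - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
            - composer install
            - bash -c "$(curl -fsSL https://s3.amazonaws.com/tools.nanobox.io/bootstrap/ci.sh)"
            - nanobox deploy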

kubectl: command not found on Travis CI

I am trying to deploy to a Kubernetes cluster using Travis CI and I get the following error.
EDIT:
invalid argument "myAcc/imgName:" for t: invalid reference format
See docker build --help
./deploy.sh: line 1: kubectl: command not found
This is my travis config file
travis.yml
sudo: required
services:
  - docker
env:
  global:
    - SHA-$(git rev-parse HEAD)
    - CLOUDSDK_CORE_DISABLE_PROMPTS=1
before-install:
  - openssl aes-256-cbc -K $encrypted_0c35eebf403c_key -iv $encrypted_0c35eebf403c_iv -in service-account.json.enc -out service-account.json -d
  - curl https://sdk.cloud.google.com | bash > /dev/null
  - source $HOME/google-cloud-sdk/path.bash.inc
  - gcloud components update kubectl
  - gcloud auth activate-service-account --key-file service-account.json
  - gcloud config set project robust-chess-234104
  - gcloud config set compute/zone asia-south1-a
  - gcloud container clusters get-credentials standard-cluster-1
  - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
deploy:
  provider: script
  script: bash ./deploy.sh
  on:
    branch: master
This is my deploy script
deploy.sh
docker build -t myAcc/imgName:$SHA
docker push myAcc/imgName:$SHA
kubectl apply -k8s
I guess the gcloud components update kubectl command is not working. Any ideas?
Thanks!
The first issue, invalid argument "myAcc/imgName:" for t: invalid reference format, happens because the variable $SHA is not defined as expected. There is a syntax issue in the variable definition: you should use = instead of - after SHA, so it should be like this:
- SHA=$(git rev-parse HEAD)
The second issue is related to kubectl: you need to install it using the following command, according to the docs:
gcloud components install kubectl
Update:
After testing this file on Travis CI I was able to figure out the issue. You should use before_install instead of before-install; as written, your pre-installation steps never get executed.
# travis.yml
---
env:
  global:
    - CLOUDSDK_CORE_DISABLE_PROMPTS=1
before_install:
  - curl https://sdk.cloud.google.com | bash > /dev/null
  - source $HOME/google-cloud-sdk/path.bash.inc
  - gcloud components install kubectl
script: kubectl version
And the final part of the build result:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.7", GitCommit:"65ecaf0671341311ce6aea0edab46ee69f65d59e", GitTreeState:"clean", BuildDate:"2019-01-24T19:32:00Z", GoVersion:"go1.10.7", Compiler:"gc", Platform:"linux/amd64"}
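For completeness, here is a sketch of the asker's config with just those fixes applied (= instead of - in the SHA assignment, before_install instead of before-install, and components install instead of components update); everything else is copied from the question and untested:
sudo: required
services:
  - docker
env:
  global:
    - SHA=$(git rev-parse HEAD)               # '=' instead of '-'
    - CLOUDSDK_CORE_DISABLE_PROMPTS=1
before_install:                                # '_' instead of '-'
  - openssl aes-256-cbc -K $encrypted_0c35eebf403c_key -iv $encrypted_0c35eebf403c_iv -in service-account.json.enc -out service-account.json -d
  - curl https://sdk.cloud.google.com | bash > /dev/null
  - source $HOME/google-cloud-sdk/path.bash.inc
  - gcloud components install kubectl          # 'install' instead of 'update'
  - gcloud auth activate-service-account --key-file service-account.json
  - gcloud config set project robust-chess-234104
  - gcloud config set compute/zone asia-south1-a
  - gcloud container clusters get-credentials standard-cluster-1
  - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
deploy:
  provider: script
  script: bash ./deploy.sh
  on:
    branch: master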

CircleCI branch build failing but tag build succeeds

I am building my project on CircleCI and I have a build job that looks like this:
build:
  <<: *defaults
  steps:
    - checkout
    - setup_remote_docker
    - run:
        name: Install pip
        command: curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py && sudo python get-pip.py
    - run:
        name: Install AWS CLI
        command: curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip" && unzip awscli-bundle.zip && sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
    - run:
        name: Login to Docker Registry
        command: aws ecr get-login --no-include-email --region us-east-1 | sh
    - run:
        name: Install Dep
        command: curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
    - run:
        name: Save Version Number
        command: echo "export VERSION_NUM=${CIRCLE_TAG}.${CIRCLE_BUILD_NUM}" > deployment/dev/.env
    - run:
        name: Build App
        command: source deployment/dev/.env && docker-compose -f deployment/dev/docker-compose.yml build
    - run:
        name: Test App
        command: |
          git config --global url."https://${GITHUB_PERSONAL_ACCESS_TOKEN}:x-oauth-basic@github.com/".insteadOf "https://github.com/"
          dep ensure
          go test -v ./...
    - run:
        name: Push Image
        command: |
          if [[ "${CIRCLE_TAG}" =~ ^v[0.9]+(\.[0-9]+)*-[a-z]*$ ]]; then
            source deployment/dev/.env
            docker-compose -f deployment/dev/docker-compose.yml push
          else
            echo 'No tag, not deploying'
          fi
    - persist_to_workspace:
        root: .
        paths:
          - deployment/*
          - tools/*
When I push a change to a branch, the build fails every time with "Couldn't connect to Docker daemon at ... - is it running?" when it reaches the Build App step of the build job.
Please help me figure out why branch builds are failing but tag builds are not.
I suspect you are hitting this docker-compose bug: https://github.com/docker/compose/issues/6050
The bug reports a misleading error (the one you're getting) when an image name in the docker-compose file is invalid.
If you use an environment variable for the image name or image tag, and that variable is set from a branch name, then it would fail on some branches, but not others.
The problem was occurring in the Save Version Number step. Sometimes the version would be .${CIRCLE_BUILD_NUM}, since no tag was passed. Docker rejects tags starting with ., so I added a conditional check to see whether CIRCLE_TAG was empty and, if it was, used a default version: v0.1.0-build.
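A sketch of what that conditional could look like in the config (my reconstruction from the description above, not the exact code):
- run:
    name: Save Version Number
    command: |
      # Fall back to a default version when the build isn't for a tag,
      # so the image tag never starts with '.'.
      if [ -z "${CIRCLE_TAG}" ]; then
        echo "export VERSION_NUM=v0.1.0-build.${CIRCLE_BUILD_NUM}" > deployment/dev/.env
      else
        echo "export VERSION_NUM=${CIRCLE_TAG}.${CIRCLE_BUILD_NUM}" > deployment/dev/.env
      fi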

Google Cloud components not being enabled

I'm trying to get an automatic build going via Travis CI to Google Cloud, but when I try to run "gcloud docker any_command", I get the message
ERROR: (gcloud) Invalid choice: 'docker'. Did you mean 'config'?
and when I try to install docker with "gcloud components install docker" I get
You cannot perform this action because the component manager has been
disabled for this installation. If you would like get the latest
version of the Google Cloud SDK, please see our main download page at:
https://developers.google.com/cloud/sdk/
ERROR: (gcloud.components.update) The component manager is disabled for this installation
This is my .travis.yml file:
sudo: required
language: python
python:
  - "2.7"
deploy:
  provider: gae
  keyfile: client-secret.json
  project: galvanic-being-138423
notifications:
  email: false
services:
  - docker
cache:
  directories:
    - $HOME/google-cloud-sdk/
env:
  - GAE_PYTHONPATH=${HOME}/.cache/google_appengine PATH=$PATH:${HOME}/google-cloud-sdk/bin PYTHONPATH=${PYTHONPATH}:${GAE_PYTHONPATH} CLOUDSDK_CORE_DISABLE_PROMPTS=1
before_install:
  - openssl aes-256-cbc -K $encrypted_3ae578884e67_key -iv $encrypted_3ae578884e67_iv -in credentials.tar.gz.enc -out credentials.tar.gz -d
  - rm -rf ${HOME}/google-cloud-sdk/
  - curl https://sdk.cloud.google.com | bash;
  - ls -l ${HOME}/google-cloud-sdk/bin
  - which gcloud
  - gcloud --version
  - if [ ! -d "${GAE_PYTHONPATH}" ]; then python scripts/fetch_gae_sdk.py $(dirname "${GAE_PYTHONPATH}"); fi
  - if [ ! -d ${HOME}/google-cloud-sdk ]; then curl https://sdk.cloud.google.com | bash; fi
install:
  - pip install pyOpenSSL
  - sudo rm -rf /opt/google-cloud-sdk/
  - export CLOUDSDK_CORE_DISABLE_PROMPTS=1
  - export CLOUDSDK_PYTHON_SITEPACKAGES=1
  - tar -xzf credentials.tar.gz
  - mkdir -p lib
  - gcloud auth activate-service-account galvanic-being-138423@appspot.gserviceaccount.com --key-file client-secret.json
  - gcloud config set project galvanic-being-138423
  - gcloud config set compute/zone europe-west1-c
  - gcloud config set container/cluster example-cluster
  - ssh-keygen -q -N "" -f ~/.ssh/google_compute_engine
  - gcloud init galvanic-being-138423
  - gcloud components update
  - gcloud components install docker
  - curl -L https://github.com/kubernetes/kubernetes/releases/download/v1.3.3/kubernetes.tar.gz > kubernetes.tar.gz
  - tar -xf kubernetes.tar.gz
  - sudo cp kubernetes/platforms/linux/amd64/kubectl /usr/local/bin/kubectl
  - sudo chmod +x /usr/local/bin/kubectl
  - docker pull wordpress:latest
  - docker build -t gcr.io/galvanic-being-138423/wordpress-testing:v1 docker/
  - gcloud docker push gcr.io/galvanic-being-138423/wordpress-testing:v1
  - kubectl config set-cluster example-cluster --server=http://galvanic-being-138423.appspot.com
  - kubectl config set-context example-cluster --cluster=example-cluster
  - kubectl config use-context example-cluster
  - kubectl run wordpress-testing --image=gcr.io/galvanic-being-138423/wordpress-testing:v1 --port=80
EDIT:
gcloud --version gives:
Google Cloud SDK 0.9.37
bq 2.0.18
bq-nix 2.0.18
compute 2014.11.25
core 2014.11.25
core-nix 2014.11.25
dns 2014.11.25
gcutil 1.16.5
gcutil-nix 1.16.5
gsutil 4.6
gsutil-nix 4.6
sql 2014.11.25
