Kubernetes - How to set up parallel clusters (Prod, Dev) sharing the same repositories/pipeline - docker

I am currently facing a situation where I need to deploy a small cluster (only 4 pods for now) which will contain 4 different microservices. This cluster has to be duplicated so I can have one PRODUCTION cluster and one DEVELOPMENT cluster.
The clusters themselves don't seem hard from my point of view (create a cluster, then deploy Docker images to pods with parameters so each uses the right resource connection strings), but I am stuck at the CI/CD part.
From a Cloud Build trigger, how do I push the Docker image to the right cluster's pods? I have absolutely no idea AT ALL how to achieve it...
Here is my cloudbuild.yaml:
steps:
# step 1 - Getting the previous (current) image
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: [
    '-c',
    'docker pull gcr.io/{PROJECT_ID}/{SERVICE_NAME}:latest || exit 0'
  ]
# step 2 - Build the image and push it to gcr.io
- name: 'gcr.io/cloud-builders/docker'
  args: [
    'build',
    '-t',
    'gcr.io/{PROJECT_ID}/{SERVICE_NAME}',
    '-t',
    'gcr.io/{PROJECT_ID}/{SERVICE_NAME}:latest',
    '.'
  ]
# step 3 - Deploy our container to our cluster
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['apply', '-f', 'service.yaml', '--force']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE={CLUSTER_ZONE}'
  - 'CLOUDSDK_CONTAINER_CLUSTER={CLUSTER_NAME}'
# step 4 - Set the image
- name: 'gcr.io/cloud-builders/kubectl'
  args: [
    'set',
    'image',
    'deployment',
    '{SERVICE_NAME}',
    '{SERVICE_NAME}=gcr.io/{PROJECT_ID}/{SERVICE_NAME}'
  ]
  env:
  - 'CLOUDSDK_COMPUTE_ZONE={CLUSTER_ZONE}'
  - 'CLOUDSDK_CONTAINER_CLUSTER={CLUSTER_NAME}'
# push images to Google Container Registry with tags
images: [
  'gcr.io/{PROJECT_ID}/{SERVICE_NAME}',
  'gcr.io/{PROJECT_ID}/{SERVICE_NAME}:latest'
]
Can anyone help me out? I don't really know which direction to go in.

Do you know about Helm charts? They are designed for multi-environment deployment.
With a different values.yaml file per environment, you can quickly deploy the same source code base to each environment.
For example, you can name the values.yaml files after the environments:
values-dev.yaml
values-sit.yaml
values-prod.yaml
The only differences are some variables, such as the environment (dev/sit/prod) and the namespaces.
So when you run the deployment, it becomes:
env=${ENVIRONMENT}
helm install -f values-${env}.yaml myredis ./redis
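For illustration, the per-environment files might differ in only a few values. A minimal sketch (these keys are hypothetical, not taken from an actual chart):

# values-dev.yaml (hypothetical)
environment: dev
namespace: myapp-dev
replicaCount: 1

# values-prod.yaml (hypothetical)
environment: prod
namespace: myapp-prod
replicaCount: 3

The chart's templates then reference these, e.g. {{ .Values.environment }} and {{ .Values.namespace }}, so the same chart deploys to either environment.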

So my question is:
How are you triggering these builds? Manually? GitHub Trigger? HTTP Trigger using the REST API?
You're almost there for the building/pushing part; you need to use substitution variables: https://cloud.google.com/cloud-build/docs/configuring-builds/substitute-variable-values
If you trigger the builds manually, you edit the build trigger and change the substitution variable to what you want it to be.
GitHub trigger -- this is a little more complex, as you might want to do releases or branches.
HTTP trigger -- same as manual: in your request you change the substitution variable.
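For example, one way to wire this up is one trigger per environment, each pinning its own substitution values. A sketch, assuming GitHub triggers and hypothetical repo names (the substitution names match the build file below):

gcloud beta builds triggers create github \
  --repo-owner=my-org --repo-name=my-repo \
  --branch-pattern='^develop$' \
  --build-config=cloudbuild.yaml \
  --substitutions=_COMPANY_ENV=dev,_CLOUDSDK_CONTAINER_CLUSTER=dev

gcloud beta builds triggers create github \
  --repo-owner=my-org --repo-name=my-repo \
  --branch-pattern='^master$' \
  --build-config=cloudbuild.yaml \
  --substitutions=_COMPANY_ENV=prod,_CLOUDSDK_CONTAINER_CLUSTER=prod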
So here's part of one of our repo build files. As you will see, there are different substitution variables we use; sometimes we want to build the image AND deploy to the cluster, other times we just want to build or deploy.
steps:
# pull docker image
- name: 'gcr.io/cloud-builders/docker'
  id: pull-docker-image
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    docker pull $${_TAG_DOCKER_IMAGE} || exit 0
# build docker image
- name: 'gcr.io/cloud-builders/docker'
  id: build-docker-image
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    if [[ "${_BUILD_IMAGE}" == "true" ]]; then
      docker build -t ${_DOCKER_IMAGE_TAG} --cache-from $${_TAG_DOCKER_IMAGE} .;
    else
      echo "skipping ... BUILD_IMAGE=${_BUILD_IMAGE}";
    fi
# push docker image
- name: 'gcr.io/cloud-builders/docker'
  id: push-docker-image
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    if [[ "${_BUILD_IMAGE}" == "true" ]]; then
      docker push ${_DOCKER_IMAGE_TAG};
    else
      echo "skipping ... BUILD_IMAGE=${_BUILD_IMAGE}";
    fi
# tag docker image
- name: 'gcr.io/cloud-builders/gcloud'
  id: tag-docker-image
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    if [[ "${_BUILD_IMAGE}" == "true" ]]; then
      gcloud container images add-tag ${_DOCKER_IMAGE_TAG} $${_TAG_DOCKER_IMAGE} -q;
    else
      echo "skipping ... BUILD_IMAGE=${_BUILD_IMAGE}";
    fi
# update service image on environment
- name: 'gcr.io/cloud-builders/kubectl'
  id: update-service-deployment-image
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    if [[ "${_UPDATE_CLUSTER}" == "true" ]]; then
      /builder/kubectl.bash set image deployment $REPO_NAME master=${_DOCKER_IMAGE_TAG} --namespace=${_DEFAULT_NAMESPACE};
    else
      echo "skipping ... UPDATE_CLUSTER=${_UPDATE_CLUSTER}";
    fi
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}'
  - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}'
# subs are needed because of our different ENVs
# _DOCKER_IMAGE_TAG = ['gcr.io/$PROJECT_ID/$REPO_NAME:gcb-${_COMPANY_ENV}-$SHORT_SHA', 'other']
# _COMPANY_ENV = ['dev', 'staging', 'prod']
# _DEFAULT_NAMESPACE = ['default'] or ['custom1', 'custom2']
# _CLOUDSDK_CONTAINER_CLUSTER = ['dev', 'prod']
# _CLOUDSDK_COMPUTE_ZONE = ['us-central1-a']
# _BUILD_IMAGE = ['true', 'false']
# _UPDATE_CLUSTER = ['true', 'false']
substitutions:
  _DOCKER_IMAGE_TAG: $DOCKER_IMAGE_TAG
  _COMPANY_ENV: dev
  _DEFAULT_NAMESPACE: default
  _CLOUDSDK_CONTAINER_CLUSTER: dev
  _CLOUDSDK_COMPUTE_ZONE: us-central1-a
  _BUILD_IMAGE: 'true'
  _UPDATE_CLUSTER: 'true'
options:
  substitution_option: 'ALLOW_LOOSE'
  env:
  - _TAG_DOCKER_IMAGE=gcr.io/$PROJECT_ID/$REPO_NAME:${_COMPANY_ENV}-latest
  - DOCKER_IMAGE_TAG=gcr.io/$PROJECT_ID/$REPO_NAME:gcb-${_COMPANY_ENV}-$SHORT_SHA
tags:
- '${_COMPANY_ENV}'
- 'build-${_BUILD_IMAGE}'
- 'update-${_UPDATE_CLUSTER}'
We have two workflows:
A GitHub trigger builds and deploys under the 'dev' environment.
We trigger via the REST API https://cloud.google.com/cloud-build/docs/api/reference/rest/v1/projects.builds/create (we replace the variables via the request.json) -- this method also works using the gcloud builds submit --substitutions CLI.
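For instance, overriding the substitutions from the CLI could look like this (a sketch; the values are illustrative and reuse the substitution names above):

gcloud builds submit . \
  --config=cloudbuild.yaml \
  --substitutions=_COMPANY_ENV=prod,_CLOUDSDK_CONTAINER_CLUSTER=prod,_BUILD_IMAGE=true,_UPDATE_CLUSTER=true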
Hope that answers your question!

The short answer for this is to apply GitOps deployment practices in your workflow.
All your Kubernetes YAMLs or Helm charts live in a single Git repository which is used by a GitOps operator.
In your CI pipeline, you only have to build and push the Docker images.
The GitOps operator fetches new image versions and commits the change to the target Kubernetes YAML in the repository.
Every change in the GitOps repository is then applied to the cluster.
See https://fluxcd.io/
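As a rough sketch of the operator side with Flux (these kinds exist in Flux v2, but the API versions, repo URL, and paths here are illustrative; check the Flux docs for the current CRDs):

apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: gitops-repo
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/gitops-repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: prod-apps
  namespace: flux-system
spec:
  interval: 5m
  path: ./clusters/prod
  prune: true
  sourceRef:
    kind: GitRepository
    name: gitops-repo

With this in place, CI only pushes images; Flux reconciles whatever is committed under ./clusters/prod into the prod cluster.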

Related

Cloud Builds failure, unable to find logs to see what is going on

I am kicking off a Dataflow flex template using a Cloud Build. In my cloud build file I am attempting to do 3 things:
build an image
publish it
run a flex template job using that image
This is my yaml file:
substitutions:
  _IMAGE: my_logic:latest4
  _JOB_NAME: 'pipelinerunner'
  _TEMP_LOCATION: ''
  _REGION: us-central1
  _FMPKEY: ''
  _PYTHON_VERSION: '3.8'
# checkout this link https://github.com/davidcavazos/python-docs-samples/blob/master/dataflow/gpu-workers/cloudbuild.yaml
steps:
- name: gcr.io/cloud-builders/docker
  args:
    [ 'build'
    , '--build-arg=python_version=$_PYTHON_VERSION'
    , '--tag=gcr.io/$PROJECT_ID/$_IMAGE'
    , '.'
    ]
# Push the image to Container Registry.
- name: gcr.io/cloud-builders/docker2
  args: [ 'push', 'gcr.io/$PROJECT_ID/$_IMAGE' ]
- name: gcr.io/$PROJECT_ID/$_IMAGE
  entrypoint: python
  args:
  - /dataflow/template/main.py
  - --runner=DataflowRunner
  - --project=$PROJECT_ID
  - --region=$_REGION
  - --job_name=$_JOB_NAME
  - --temp_location=$_TEMP_LOCATION
  - --sdk_container_image=gcr.io/$PROJECT_ID/$_IMAGE
  - --disk_size_gb=50
  - --year=2018
  - --quarter=QTR1
  - --fmpkey=$_FMPKEY
  - --setup_file=/dataflow/template/setup.py
options:
  logging: CLOUD_LOGGING_ONLY
# Use the Compute Engine default service account to launch the job.
serviceAccount: projects/$PROJECT_ID/serviceAccounts/$PROJECT_NUMBER-compute@developer.gserviceaccount.com
And this is the command I am launching:
gcloud beta builds submit \
  --config run.yaml \
  --substitutions _REGION=$REGION \
  --substitutions _FMPKEY=$FMPKEY \
  --no-source
The error message I am getting is this:
Logs are available at [https://console.cloud.google.com/cloud-build/builds/0f5953cc-7802-4e53-b7c4-7e79c6f0d0c7?project=111111111].
ERROR: (gcloud.beta.builds.submit) build 0f5953cc-7802-4e53-b7c4-7e79c6f0d0c7 completed with status "FAILURE
but I cannot access the logs from the URL mentioned above.
I cannot see the logs, so I am unable to tell what is wrong, but I strongly suspect something in my run.yaml is not quite right.
Note: before this, I was building the image myself by launching this command:
gcloud builds submit --project=$PROJECT_ID --tag $TEMPLATE_IMAGE .
and my run.yaml contained just one step, the last one, and everything worked fine.
But I am trying to see if I can do everything in the yaml file.
Could anyone advise on what might be incorrect? I don't have much experience with yaml files for cloud build.
thanks and regards
Marco
I guess the pipeline does not work because (in the second step) the container gcr.io/cloud-builders/docker2 does not exist (check https://gcr.io/cloud-builders/ - there is a docker builder, but no docker2).
This second step pushes the final container to the registry, and the third step depends on it, so that one will fail too.
You can build the container and push it to the container registry in just one step:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/$IMAGE_NAME', '<path_to_docker-file>']
images: ['gcr.io/$PROJECT_ID/$IMAGE_NAME']
OK, sorted: the problem was the way I was launching the build command.
This is the original:
gcloud beta builds submit \
  --config run.yaml \
  --substitutions _REGION=$REGION \
  --substitutions _FMPKEY=$FMPKEY \
  --no-source
Apparently, when I removed the --no-source flag, everything worked fine.
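That makes sense in hindsight: with --no-source, no build context is uploaded, so the docker build step has nothing to build from. The working invocation is presumably just the same command without that flag:

gcloud beta builds submit \
  --config run.yaml \
  --substitutions _REGION=$REGION \
  --substitutions _FMPKEY=$FMPKEY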
I think I copied and pasted the command without really understanding it.
regards

My cloudbuild.yaml is failing. Please review my cloudbuild.yaml

I am trying to deploy a react app to a kubernetes cluster. All my kubernetes files reside in the k8s/ folder. In the k8s/ folder I have a deployment.yaml and a service.yaml file.
Below is my cloudbuild.yaml file, which resides in the root folder. The gcr.io/cloud-builders/kubectl step (Stage 3) is failing with the error below:
build step 2 "gcr.io/cloud-builders/kubectl" failed: step exited with non-zero status: 1
steps:
# Build the image - Stage 1
- name: 'gcr.io/cloud-builders/docker'
  args: ['build','-t','gcr.io/${_PROJECT}/${_CONTAINERNAME}:${_VERSION}','.']
  timeout: 1500s
# Push the image - Stage 2
- name: 'gcr.io/cloud-builders/docker'
  args: ['push','gcr.io/${_PROJECT}/${_CONTAINERNAME}:${_VERSION}']
# Deploy changes to kubernetes config files - Stage 3
- name: "gcr.io/cloud-builders/kubectl"
  args: ["apply", "-f", "k8s/"]
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=${_ZONE}'
  - 'CLOUDSDK_CONTAINER_CLUSTER=${_GKE_CLUSTER}'
# These are variable substitutions
substitutions:
  # GCP-specific configuration. Please DON'T change anything
  _PROJECT: my-projects-121212
  _ZONE: us-central1-c
  _GKE_CLUSTER: cluster-1
  # Repository-specific configuration. DevOps can change these settings
  _DEPLOYMENTNAME: react
  _CONTAINERNAME: react
  _REPO_NAME: react-app
  # Developers ONLY change
  _VERSION: v1.0
options:
  substitution_option: 'ALLOW_LOOSE'
  machineType: 'N1_HIGHCPU_8'
timeout: 2500s
In step 3, there are double quotes: name: "gcr.io/cloud-builders/kubectl"
If you replace them with single quotes, the issue should be fixed.
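In other words, Stage 3 would become (a sketch of the suggested change; the quoting style is the only difference):

- name: 'gcr.io/cloud-builders/kubectl'
  args: ['apply', '-f', 'k8s/']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=${_ZONE}'
  - 'CLOUDSDK_CONTAINER_CLUSTER=${_GKE_CLUSTER}'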

Do you know any way of configuring Cloud Run to use a Cloud SQL instance from a cloudbuild.yaml?

I'm trying to connect a Cloud SQL instance to a Cloud Run service in a safer way than making the Postgres DB public. I read it can be done through the CLI, but isn't it better to manage the configuration using files rather than commands? In this case I would have to re-run the command every time the Cloud Build trigger builds the image, wouldn't I? So I'm thinking of including --set-cloudsql-instances in the cloudbuild.yaml with the following code, but after running it the logs say --set-cloudsql-instances INSTANCE_NAME is an invalid argument. Can you give me any advice on setting this?
Thanks in advance.
gcloud beta run deploy $PROJECT --image $IMAGE_NAME --platform=managed --region us-central1 --project $PROJECT --set-cloudsql-instances $PROJECT-db
steps:
- name: gcr.io/cloud-builders/docker
  args:
  - build
  - '--no-cache'
  - '-t'
  - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
  - .
  - '-f'
  - Dockerfile
  id: Build
- name: gcr.io/cloud-builders/docker
  args:
  - push
  - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
  id: Push
- name: gcr.io/google.com/cloudsdktool/cloud-sdk
  args:
  - run
  - services
  - update
  - $_SERVICE_NAME
  - '--platform=managed'
  - '--image=$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
  - >-
    --labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID,gcb-trigger-id=$_TRIGGER_ID,$_LABELS
  - '--region=$_DEPLOY_REGION'
  - '--quiet'
  - '--set-cloudsql-instances $PROJECT_ID:$_DEPLOY_REGION:INSTANCE_NAME'
  id: Deploy
  entrypoint: gcloud
images:
- '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
options:
  substitutionOption: ALLOW_LOOSE
substitutions:
Add an equals sign (without it, the flag and its value form a single argument containing a space, which gcloud rejects as invalid), and define the INSTANCE_NAME variable in your cloudbuild.yaml (I noted it as the substitution variable $_INSTANCE_NAME):
- '--set-cloudsql-instances=$PROJECT_ID:$_DEPLOY_REGION:$_INSTANCE_NAME'
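A minimal sketch of the matching substitutions block (the instance name is a placeholder):

substitutions:
  _INSTANCE_NAME: my-postgres-instance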

How to mount a data volume onto an added node in a Jelastic environment

I want to create a Jelastic environment with a load balancer and a cp node. I want to add the cp node with the addNodes API method, because it needs specific data to start. My manifest looks like this:
jpsVersion: 1.3
jpsType: install
application:
  id: test-app
  name: Test App
  version: 0.0
  env:
    topology:
      nodes:
      - nodeGroup: bl
        nodeType: nginx-dockerized
        tag: 1.16.1
        displayName: Node balancing
        count: 1
        fixedCloudlets: 1
        cloudlets: 4
  onInstall:
  - addFile
  - setup
  actions:
    addFile:
    - cmd [bl]:
      - mkdir /data
      - echo "Hello world" > /data/test.txt
      user: root
    setup:
    - addNodes:
      - nodeGroup: cp
        nodeType: docker
        displayName: Test Mount
        count: 1
        fixedCloudlets: 1
        cloudlets: 4
        dockerName: alpine
        volumeMounts:
          /kickstart:
            readOnly: true
            sourcePath: /data
            sourceNodeGroup: bl
        dockerVolumes:
        - /kickstart
For some reason, I want my alpine image to be provided with the data I am storing in the folder /kickstart. Of course, in this simple case it's completely irrelevant; the example above is just kept simple enough to be reproducible. In my real use case, the docker image I want to mount will not be able to run without application-specific data, filled in from the manifest's settings completed upon user input. That's why the data must be available when the docker node is added.
My problem is that the above simple manifest does not work: on my docker node, I have no access to the /kickstart/test.txt file. What am I doing wrong?
The volumeMounts option is not available for the addNodes action.
Here is an example of how to implement it:
type: install
name: Test App

nodes:
- nodeGroup: bl
  nodeType: nginx
  tag: 1.16.1
  displayName: Node balancing
  cloudlets: 4

- nodeGroup: cp
  displayName: Test Mount
  cloudlets: 4
  image: alpine
  startServiceOnCreation: false
  volumes:
  - /kickstart
  volumeMounts:
    /kickstart:
      sourcePath: /data
      sourceNodeGroup: bl
      readOnly: true

onInstall:
- cmd [bl]: |-
    echo "Hello world" > /data/test.txt
  user: root
- env.control.ExecDockerRunCmd [${nodes.cp.join(id,)}]
Also, you can use non-existent directories in volumeMounts; everything will be created automatically.

How to use a variable docker image in github-actions?

I am trying to write a custom github-action that runs some commands in a docker container but allows the user to select which docker container they run in (i.e. so I can run the same build instructions across different versions of the runtime environment).
My gut instinct was to have my .github/actions/main/action.yml file as:
name: 'Docker container command execution'
inputs:
  dockerfile:
    default: Dockerfile_r_latest
runs:
  using: 'docker'
  image: '${{ inputs.dockerfile }}'
  args:
  - /scripts/commands.sh
However this errors with:
##[error](Line: 7, Col: 10): Unrecognized named-value: 'inputs'. Located at position 1 within expression: inputs.dockerfile
Any help would be appreciated!
File References
My .github/workflow/build_and_test.yml file is:
name: Test Package
on:
  [push, pull_request]
jobs:
  R_latest:
    name: Test on latest
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@master
      name: Checkout project
    - uses: ./.github/actions/main
      name: Build and test
      with:
        dockerfile: Dockerfile_r_latest
And my Dockerfile .github/actions/main/Dockerfile_r_latest is:
FROM rocker/verse:latest
ADD scripts /scripts
ENTRYPOINT [ "bash", "-c" ]
Interesting approach! I'm not sure if it's possible to use expressions in the image field of the action metadata. I would guess that the only fields that can take expressions instead of hardcoded strings are the args for the image, so that the inputs can be passed.
For reference, this is the args section of the action.yml metadata:
https://help.github.com/en/articles/metadata-syntax-for-github-actions#args
I think there are other ways to achieve what you want to do. Have you tried using the jobs.<job_id>.container syntax? That allows you to specify an image that the steps of a job will run in. It will require that you publish the image to a public repository, though, so take care not to include any secrets.
For example, if you published your image to Docker Hub at gowerc/r-latest, your workflow might look something like this:
name: Test Package
on:
  [push, pull_request]
jobs:
  R_latest:
    name: Test on latest
    runs-on: ubuntu-latest
    container: gowerc/r-latest
    steps:
    - uses: actions/checkout@master
      name: Checkout project
    - name: Build and test
      run: ./scripts/commands.sh
ref: https://help.github.com/en/articles/workflow-syntax-for-github-actions#jobsjob_idcontainer
Alternatively, you can also specify your image at the step level with uses. You could then pass a command via args to execute your script:
name: my workflow
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@master
    - name: Check container
      uses: docker://alpine:3.8
      with:
        args: /bin/sh -c "cat /etc/alpine-release"
ref: https://help.github.com/en/github/automating-your-workflow-with-github-actions/workflow-syntax-for-github-actions#example-using-a-docker-hub-action
In addition to @peterevans' answer, I would add a third option where you use a simple docker run command and pass any env variables you have defined.
That helped to solve 3 things:
Reuse a custom docker image built within the steps, for testing actions. This seems impossible with uses, since the image is pulled in a Setup job step that occurs before any steps of the job, when the image doesn't exist yet.
The image can also be stored in a private docker registry.
You can use a variable for the docker image.
My workflow looks like this:
name: Build-Test-Push
on:
  push:
    branches:
    - master
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  ECR_REGISTRY: ${{ secrets.AWS_ECR_REGISTRY }}
  ECR_REPOSITORY: myproject/myimage
  IMAGE_TAG: ${{ github.sha }}
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
    - name: Checking out
      uses: actions/checkout@v2
      with:
        ref: master
    - name: Login to AWS ECR
      id: login-ecr
      uses: aws-actions/amazon-ecr-login@v1
    - name: Build
      run: |
        docker pull $ECR_REGISTRY/$ECR_REPOSITORY || true
        docker build . -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG -t $ECR_REGISTRY/$ECR_REPOSITORY:latest
    - name: Test
      run: |
        docker run $ECR_REGISTRY/$ECR_REPOSITORY:latest /bin/bash -c "make test"
    - name: Push
      run: |
        docker push $ECR_REGISTRY/$ECR_REPOSITORY
Here is another approach. The Docker image to use is passed to a cibuild shell script that takes care of pulling the right image.
GitHub workflow file:
name: 'GH Actions CI'
on:
  push:
    branches: ['*master', '*0.[0-9]?.x']
  pull_request:
    # The branches below must be a subset of the branches above
    branches: ['*master', '*0.[0-9]?.x']
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    strategy:
      fail-fast: true
      matrix:
        include:
        - FROM: 'ubuntu:focal'
        - FROM: 'ubuntu:bionic'
        - FROM: 'ubuntu:xenial'
        - FROM: 'debian:buster'
        - FROM: 'debian:stretch'
        - FROM: 'opensuse/leap'
        - FROM: 'fedora:33'
        - FROM: 'fedora:32'
        - FROM: 'centos:8'
    steps:
    - name: Checkout repository
      uses: actions/checkout@v2
      with:
        # We must fetch at least the immediate parents so that if this is
        # a pull request then we can checkout the head.
        fetch-depth: 2
    # If this run was triggered by a pull request event, then checkout
    # the head of the pull request instead of the merge commit.
    - run: git checkout HEAD^2
      if: ${{ github.event_name == 'pull_request' }}
    - name: Run CI
      env:
        FROM: ${{ matrix.FROM }}
      run: script/cibuild
Bash script script/cibuild:
#!/bin/bash
set -e
docker run --name my-docker-container $FROM script/custom-script.sh
docker cp my-docker-container:/usr/src/my-workdir/my-outputs .
docker rm my-docker-container
echo "cibuild Done!"
Put your custom commands in script/custom-script.sh.
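For completeness, a minimal sketch of what script/custom-script.sh might contain (entirely hypothetical; the original script is not shown, and the actual build steps depend on the project):

#!/bin/bash
set -e
# hypothetical: build and test the project inside the $FROM container
make
make test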
