How to Clear Kaniko Cache on Build for Cloud Run - google-cloud-run

We've updated the Dockerfile and would like to build without using the old Kaniko cache, replacing the cached layers at the same time.
How do we force it to build new cache layers?
gcloud config set builds/use_kaniko True
gcloud beta builds submit --tag="gcr.io/${PROJECT_NAME}/${name}" --timeout="2h" --machine-type="n1-highcpu-32"

It turns out the --no-cache option will also replace the existing Kaniko cache:
gcloud config set builds/use_kaniko True
gcloud beta builds submit --tag="gcr.io/${PROJECT_NAME}/${name}" --timeout="2h" --machine-type="n1-highcpu-32" --no-cache
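Since the cached and cache-busting builds differ only in the final flag, a small wrapper can keep the two invocations in sync. This is just a sketch (PROJECT_NAME and name are placeholders), and it echoes the command instead of executing it so it can be inspected first:

```shell
#!/bin/sh
# Compose the gcloud submit command; pass "nocache" to rebuild the Kaniko cache.
build_cmd() {
  cmd="gcloud beta builds submit --tag=gcr.io/${PROJECT_NAME}/${name} --timeout=2h --machine-type=n1-highcpu-32"
  if [ "$1" = "nocache" ]; then
    cmd="$cmd --no-cache"
  fi
  echo "$cmd"
}

PROJECT_NAME="my-project"
name="my-service"
build_cmd nocache   # prints the submit command with --no-cache appended
```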

Related

How to set SDK release when using Bitbucket repository integration

I have installed the Bitbucket integration on Sentry and used a Bitbucket pipeline to automatically notify Sentry and associate releases with commits, as described in Sentry's documentation.
I have also set up source maps to be uploaded as seen below:
sentry-cli releases files $BITBUCKET_COMMIT upload-sourcemaps build
The Bitbucket pipeline and the source map upload both use the $BITBUCKET_COMMIT as the identifier.
I am trying to figure out how to configure the SDK release to use this variable as my current set up is below:
if (process.env.NODE_ENV.toString().toLowerCase() === 'production') {
  Sentry.init({
    dsn: process.env.REACT_APP_SENTRY_DSN,
  });
}
I found out how to do this. BITBUCKET_COMMIT is an environment variable available in the Bitbucket pipeline during the build, so I made it available to my Docker container by passing it as a build argument in the Docker build step.
docker build --build-arg release=$BITBUCKET_COMMIT
I could then expose the passed argument to my React build command through the Dockerfile (the ARG declaration is required for --build-arg to take effect):
# Dockerfile
ARG release
ENV BITBUCKET_COMMIT=$release
Then within my package.json I set a release variable (e.g. REACT_APP_SENTRY_RELEASE, so it does not clash with the DSN variable) during the build, and pass it to Sentry.init as the release option:
"build": "REACT_APP_SENTRY_RELEASE=$BITBUCKET_COMMIT react-scripts build"

Helm Set Docker Image Tag Dynamically

I am pushing Docker images to our private registry via Jenkins with the following command:
def dockerImage = docker.build("repo/myapp:${env.BUILD_NUMBER}")
(BUILD_NUMBER increases after every build.)
Because I am new to Helm, I could not decide how I should set the image tag in values.yaml.
I would like to deploy my app to multiple environments such as:
dev
test
prod
Let's say I was able to deploy my app via Helm to each environment, and the latest BUILD_NUMBER is:
100 for dev
101 for test
102 for prod
What should be the tag value, then?
image:
  repository: registry/myrepo/image
  tag:
You should put some tag into your values.yaml to act as the default tag; every Helm chart has one, as you can see in the official Helm charts.
Now, you have two options on how to act with the different environments.
Option 1: Command line parameters
While installing your Helm Chart, you can specify the tag name dynamically with --set. For example:
$ helm install --set image.tag=12345 <your-chart-name>
Option 2: Separate values.yaml files
You can store separate values.yaml in your repository, like:
values.dev.yaml
values.prod.yaml
Then pass the matching file for each environment from your Jenkins pipeline.
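A sketch of wiring that into the pipeline, assuming the environment name arrives in an ENV_NAME variable (the release name and chart path are placeholders), printing the command instead of running it:

```shell
#!/bin/sh
# Pick the values file for the target environment; default to dev.
ENV_NAME="${ENV_NAME:-dev}"
VALUES_FILE="values.${ENV_NAME}.yaml"
echo "helm upgrade --install myapp ./chart -f ${VALUES_FILE}"
```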
I just ran into this issue with GitHub Actions. As the other answer already noted, set the image.tag in values.yaml to something as a default. I use latest as the default.
The problem is that the helm upgrade command only upgrades if the image tag is different. So "latest" isn't unique enough for Helm to do the rolling update. I use a SHA in GitHub Actions as my unique version tag. You could tag the image like this:
def dockerImage = docker.build("repo/myapp:${env.NAME}-${env.BUILD_NUMBER}")
Then in your helm command just add a --set:
helm upgrade <helm-app> <helm-chart> --set image.tag=${env.NAME}-${env.BUILD_NUMBER}
This command will override whatever value is in values.yaml. Keep in mind that the --set value must follow the structure of your values.yaml, so in this case image is a top-level object with a property named tag:
values.yaml:
image:
  name:
  tag:
  pullPolicy:
port:
replicaCount:
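Composing the --set flag in the pipeline keeps the dotted path next to the structure it must match; a sketch (app name, chart path and build number are placeholders) that echoes the final command instead of running it:

```shell
#!/bin/sh
# image.tag in --set mirrors the image:/tag: nesting of values.yaml.
NAME="${NAME:-myapp}"
BUILD_NUMBER="${BUILD_NUMBER:-100}"
IMAGE_TAG="${NAME}-${BUILD_NUMBER}"
echo "helm upgrade myapp ./chart --set image.tag=${IMAGE_TAG}"
```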
Maybe it's late, but I hope this helps someone with a similar query later.
I had a similar situation and was looking for options. It was painful, as the helm 3 package command doesn't come with the --set option that existed in version 2.
Solution:
Implemented with Python jinja packages, along with environment variables, with below steps.
Create a values.yaml.j2 file inside your chart directory, containing your values file with templated entries as below:
name: {{ APPLICATION | default("SampleApp") }}
labelname: {{ APPLICATION | default("SampleApp") }}
image:
  imageRepo: "SampleApp"
  imageTag: {{ APPLICATION_IMAGE_TAG | default("1.0.27") }}
Dependency packages (in container):
sh 'yum -y install python3'
sh 'yum -y install python3-pip'
sh 'yum -y install python-setuptools'
sh 'python3 -m pip install jinja-cli'
Sample Environment Variables In your Build Pipeline:
APPLICATION= "AppName"
APPLICATION_VERSION= '1.0'
APPLICATION_CHART_VERSION= '1.0'
APPLICATION_IMAGE_TAG= "1.0.${env.BUILD_NUMBER}"
Now in your pipeline, before packaging the chart, render the template with one jinja command:
sh "jinja CHART_DIR/values.yaml.j2 -X APP.* -o CHART_DIR/values.yaml"
helm package CHART_DIR
Done!
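If pulling in python3 and jinja-cli is too heavy, plain POSIX parameter expansion can approximate the same default-or-override rendering for a template this simple; a sketch using the same variable names as above:

```shell
#!/bin/sh
# Render the values file from environment variables, falling back to defaults.
render_values() {
  cat <<EOF
name: ${APPLICATION:-SampleApp}
labelname: ${APPLICATION:-SampleApp}
image:
  imageRepo: "SampleApp"
  imageTag: ${APPLICATION_IMAGE_TAG:-1.0.27}
EOF
}

render_values
```

Redirect the output to CHART_DIR/values.yaml before `helm package` exactly as in the jinja variant.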

Get latest tag in .gitlab-ci.yml for Docker build

I want to add a tag when building a Docker image. This is what I'm doing so far, but I do not know how to get the latest git tag of the repository being deployed:
docker build -t company/app .
My goal
docker build -t company/app:$LATEST_TAG_IN_REPO? .
Since you're looking for the "latest" git tag which is an ancestor of the currently building commit you probably want to use
git describe --tags --abbrev=0
to get it and use it like:
docker build -t company/app:$(git describe --tags --abbrev=0) .
See the git describe documentation for the finer points.
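If you want to see what git describe --tags --abbrev=0 picks, a throwaway repository makes it easy to experiment (the repo contents and tag name here are arbitrary):

```shell
#!/bin/sh
set -e
# Scratch repo: one tagged commit, then one more commit on top.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "first"
git tag v1.2.3
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "second"
# Nearest ancestor tag; --abbrev=0 drops the "-1-g<hash>" suffix.
DESCRIBED=$(git describe --tags --abbrev=0)
echo "$DESCRIBED"   # v1.2.3
```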
You can try using $CI_COMMIT_TAG or $CI_COMMIT_REF_NAME, this is part of the predefined variables accessible during builds.
If you want to see all the environment variables available during the build, this should work as one of your jobs:
script:
- env
The pre-defined variable $CI_COMMIT_TAG is empty if no git tag points to the current commit in the pipeline. If you either want the current tag or the current SHA of the commit (as a fallback), you can use IMAGE_VERSION=${CI_COMMIT_TAG:-"$CI_COMMIT_SHORT_SHA"}
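The ${VAR:-fallback} form is plain POSIX shell, and it covers the empty-string case, which is exactly what GitLab leaves you with on untagged pipelines. The behaviour is easy to check locally with made-up values:

```shell
#!/bin/sh
# Untagged pipeline: CI_COMMIT_TAG is empty, so the short SHA wins.
CI_COMMIT_SHORT_SHA="abc1234"
CI_COMMIT_TAG=""
IMAGE_VERSION=${CI_COMMIT_TAG:-"$CI_COMMIT_SHORT_SHA"}
echo "$IMAGE_VERSION"   # abc1234
```

On a tag pipeline CI_COMMIT_TAG is non-empty, so the same expansion yields the tag instead.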

Build docker image, tag with built image id, push the image

Background: Running Kubernetes on Google Cloud.
Because Kubernetes won't tolerate the :latest tag for rolling updates, I'd find something like this useful:
docker build . -t gcr.io/project/nginx:{built_image_id} && docker push gcr.io/project/nginx:{built_image_id}
I saw a blog post about using the git commit hash as a tag. Are there any other alternatives that skip the "copy the git hash" step?
Thanks 😊
According to the Kubernetes documentation:
“Doing a rolling update from image:latest to a new image:latest will fail, even if the image at that tag has changed. Moreover, the use of :latest is not recommended.”
They provide some configuration best practices that you can use as a guide.
From Denis's answer, I got this, which should do the job:
docker build . -t gcr.io/project/nginx:$(git rev-parse --short HEAD) && docker push gcr.io/project/nginx:$(git rev-parse --short HEAD)
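Capturing the hash once avoids running git rev-parse twice and guarantees that build and push use the identical tag; a sketch that echoes the commands instead of executing them (the "dev" fallback for non-git directories is an addition):

```shell
#!/bin/sh
# Resolve the tag once; fall back to "dev" outside a git checkout.
TAG=$(git rev-parse --short HEAD 2>/dev/null || echo dev)
IMAGE="gcr.io/project/nginx:${TAG}"
echo "docker build . -t ${IMAGE}"
echo "docker push ${IMAGE}"
```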

Trigger step in Bitbucket pipelines

I have a CI pipeline in Bitbucket which is building, testing and deploying an application.
The thing is that after the deploy I want to run selenium tests.
The Selenium tests are in another Bitbucket repository and have their own pipeline.
Is there a trigger step in the Bitbucket pipeline to trigger a pipeline when a previous one has finished?
I do not want to do a fake push to the test repository to trigger those tests.
The most "correct" way I can think of doing this is to use the Bitbucket REST API to manually trigger a pipeline on the other repository, after your deployment completes.
There are several examples of how to create a pipeline here: https://developer.atlassian.com/bitbucket/api/2/reference/resource/repositories/%7Bworkspace%7D/%7Brepo_slug%7D/pipelines/#post
Copying the first example, this is how to trigger a pipeline for the latest commit on master:
$ curl -X POST -is -u username:password \
  -H 'Content-Type: application/json' \
  https://api.bitbucket.org/2.0/repositories/jeroendr/meat-demo2/pipelines/ \
  -d '
  {
    "target": {
      "ref_type": "branch",
      "type": "pipeline_ref_target",
      "ref_name": "master"
    }
  }'
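Only the target object changes if you want a different branch; a sketch that composes the request without sending it (workspace, repo and credentials are placeholders):

```shell
#!/bin/sh
# Build the trigger payload for a given branch.
WORKSPACE="jeroendr"
REPO="meat-demo2"
BRANCH="master"
URL="https://api.bitbucket.org/2.0/repositories/${WORKSPACE}/${REPO}/pipelines/"
BODY=$(printf '{"target":{"ref_type":"branch","type":"pipeline_ref_target","ref_name":"%s"}}' "$BRANCH")
echo "curl -X POST -u username:app-password -H 'Content-Type: application/json' -d '${BODY}' ${URL}"
```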
According to their official documentation there is no easy way to do this, because jobs are isolated to the scope of one repository. You can still achieve your task in the following way:
create a Docker image with the minimum setup required to execute your tests inside it
upload it to Docker Hub (or some other registry if you have one)
use that Docker image in the last step of your pipeline, after deploy, to execute the tests
Try the official Bitbucket pipeline trigger pipe: https://bitbucket.org/product/features/pipelines/integrations?p=atlassian/trigger-pipeline
You can run it in an after-deploy step:
script:
  - pipe: atlassian/trigger-pipeline:4.1.7
    variables:
      BITBUCKET_USERNAME: $BITBUCKET_USERNAME
      BITBUCKET_APP_PASSWORD: $BITBUCKET_APP_PASSWORD
      REPOSITORY: 'your-awesome-repo'
      ACCOUNT: 'teams-in-space'
@BigGinDaHouse I did something more or less like you say.
My step is built on top of a Docker image with headless Chrome, npm and git.
I followed the steps below:
I set a private key for the remote repo in the original repo, encoded as base64 (see the documentation). The public key is set on the remote repo under the SSH Access option in the Bitbucket menu.
In the pipeline step I decode it, write it to a file, and change its permissions to 400.
I add this key inside the Docker image with ssh-add.
Then I am able to do a git clone followed by npm install and npm test.
NOTE: entry.sh is there because I start the headless browser from it.
- step:
    image: kimy82/headless-selenium-npm-git
    script:
      - echo $key_in_env_variable_in_bitbucket | base64 --decode > priv_key
      - chmod 400 ./priv_key
      - eval `ssh-agent -s`
      - ssh-add priv_key
      - git clone git@bitbucket.org:project.git
      - cd project
      - nohup bash /usr/bin/entry.sh >> out.log &
      - npm install
      - npm test
The top answers (this and this) are correct; they work.
Just adding that we found out (after a LOT of trial and error) that the user executing the pipeline must have WRITE permissions on the repo where the pipeline is invoked (even though their app password permissions were set to "WRITE" for repos and pipelines).
Also, this works for executing pipelines in Bitbucket Cloud or on-premise, through local runners.
(Answering as I lack the reputation for commenting.)
