How can I use environment variables in Cloud Run with continuous deployment? - docker

I am using Cloud Run and I want to enable continuous deployment with GitHub, but obviously I can't commit my env variables, so what can I use?
I can't set them through "Edit and deploy a new revision" because that isn't continuous: I have to open the service, click through the form, and fill in the env variables by hand.
I can't use ENV in my Dockerfile because I would have to push it to my GitHub repo.
I can't use substitutions in Cloud Build because I am using a Dockerfile, and that option only works with a cloudbuild.yaml (which I don't know how to create; I only know Docker :)
Maybe I can edit the YAML on Cloud Run, but I am not sure that is a good option.
Maybe I can pass them if I use gcloud builds, but then I have to click "Edit and deploy a new revision", and that is not continuous deployment.
Here is my Dockerfile, in case you want to help me transform it into a cloudbuild.yaml:
FROM node:15
WORKDIR /app
# Copy manifests first so the dependency layer can be cached
COPY package*.json ./
ENV ENV=production
ENV PORT=3000
ENV API_URL=https://api.mysite.com
RUN npm install --only=production
COPY . .
RUN npm run build
CMD ["npm", "start"]

In the Google documentation, I found how to create the cloudbuild.yaml for continuous deployment:
steps:
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/api:$COMMIT_SHA', '.']
  # Push the container image to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/api:$COMMIT_SHA']
  # Deploy container image to Cloud Run
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args:
      - 'run'
      - 'deploy'
      - 'api'
      - '--image'
      - 'gcr.io/$PROJECT_ID/api:$COMMIT_SHA'
      - '--region'
      - 'us-east1'
      - '--platform'
      - 'managed'
images:
  - 'gcr.io/$PROJECT_ID/api:$COMMIT_SHA'
You have to change api to the name of your service.
Afterwards, I clicked "Edit and deploy a new revision" and set the environment variables there.
All subsequent continuous deployments keep the same environment variables that I set on that revision.
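If you'd rather set them once from the CLI instead of the console, something like this should also work (a sketch; the service name api and the region are taken from the example above; note that PORT is reserved by Cloud Run and cannot be set as an env var):
gcloud run services update api \
  --region us-east1 \
  --platform managed \
  --update-env-vars ENV=production,API_URL=https://api.mysite.com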

You're not passing any environment variables to the service.
Run gcloud beta run deploy --help and look for --set-env-vars:
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args:
    - 'run'
    - 'deploy'
    - 'api'
    - '--image'
    - 'gcr.io/$PROJECT_ID/api:$COMMIT_SHA'
    - '--region'
    - 'us-east1'
    - '--platform'
    - 'managed'
    - '--set-env-vars'
    - 'API_URL=${_API_URL}'
You can use substitutions in the build trigger: https://cloud.google.com/build/docs/configuring-builds/substitute-variable-values
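For example, you can declare a default for _API_URL in cloudbuild.yaml and let the value configured on the trigger override it (a sketch; the URL is just a placeholder):
substitutions:
  _API_URL: 'https://api.mysite.com'  # overridden by the value set on the trigger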

Related

GCP Cloud Build via trigger: Dockerfile not found

I am trying to trigger a Docker image build on GCP Cloud Build via a webhook called from Gitlab. The webhook works, but the build stops when docker build runs, with this error:
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /workspace/Dockerfile: no such file or directory
The YAML for this step is:
- name: gcr.io/cloud-builders/docker
  args:
    - build
    - '-t'
    - '${_ARTIFACT_REPO}'
    - .
where I later supply the variable _ARTIFACT_REPO via substitutions.
My Gitlab repo includes the Dockerfile at the root level, so the repo structure is:
app/
  .gitignore
  Dockerfile
  README.md
  requirements.txt
The error message indicates that the Dockerfile cannot be found, but I do not understand why this is the case. Help is much appreciated!
Just solved the issue:
I followed the GCP docs (link) where they include these two steps in the cloudbuild:
- name: gcr.io/cloud-builders/git
  args:
    - clone
    - '-n'
    - 'git@gitlab.com/GITLAB_REPO'
    - .
  volumes:
    - name: ssh
      path: /root/.ssh
- name: gcr.io/cloud-builders/git
  args:
    - checkout
    - $_TO_SHA
As I did not require a specific checkout, I deleted the second of those steps, but I overlooked the -n flag in the first step, which prevents the cloned repo from being checked out.
So I just deleted the - '-n' line and the issue was solved; the working step is shown below.
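For reference, the working clone step (the docs example minus the -n) would look like this:
- name: gcr.io/cloud-builders/git
  args:
    - clone
    - 'git@gitlab.com/GITLAB_REPO'
    - .
  volumes:
    - name: ssh
      path: /root/.ssh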

How to build Nx monorepo apps in Gitlab CI Runner

I am trying to have a gitlab CI that performs the following actions:
Install yarn dependencies and cache them, so I don't have to run yarn install in every job
Test all of my modified apps with the nx affected command
Build all of my modified apps with the nx affected command
Build my docker images with my modified apps
I have tried many ways to do this in my CI and none of them worked; I'm quite stuck, actually.
This is my current CI:
default:
  image: registry.gitlab.com/xxxx/xxxx/xxxx

stages:
  - setup
  - test
  - build
  - forge

.distributed:
  interruptible: true
  only:
    - main
    - develop
  cache:
    key:
      files:
        - yarn.lock
    paths:
      - node_modules
      - .yarn
  before_script:
    - yarn install --cache-folder .yarn-cache --immutable --immutable-cache --check-cache
    - NX_HEAD=$CI_COMMIT_SHA
    - NX_BASE=${CI_MERGE_REQUEST_DIFF_BASE_SHA:-$CI_COMMIT_BEFORE_SHA}
  artifacts:
    paths:
      - node_modules

test:
  stage: test
  extends: .distributed
  script:
    - yarn nx affected --base=$NX_BASE --head=$NX_HEAD --target=test --parallel=3 --ci --code-coverage

build:
  stage: build
  extends: .distributed
  script:
    - yarn nx affected --base=$NX_BASE --head=$NX_HEAD --target=build --parallel=3

forge-docker-landing-staging:
  stage: forge
  services:
    - docker:20.10.16-dind
  rules:
    - if: $CI_COMMIT_BRANCH == "develop"
      allow_failure: true
    - exists:
        - "dist/apps/landing/*"
      allow_failure: true
  script:
    - docker build -f Dockerfile.landing -t landing:staging .
Currently, here is what works and what doesn't:
❌ Caching doesn't work; yarn install runs in every job that has extends: .distributed
✅ The Nx affected commands work as expected (test and build)
❌ Building the apps with Docker is not working; I have some trouble with Docker-in-Docker.
Problem #1: You don't cache your .yarn-cache directory, even though you explicitly reference it in the yarn install call in your before_script section. The solution is simple: add .yarn-cache to your cache.paths section, as sketched below.
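A sketch of the corrected cache block (same keys as in your pipeline above):
cache:
  key:
    files:
      - yarn.lock
  paths:
    - node_modules
    - .yarn-cache  # must match the --cache-folder flag in before_script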
Regarding
yarn install runs in every job that has extends: .distributed
This is intended behavior in your pipeline, since extends basically merges sections of your gitlab-ci config, so the test stage effectively runs the following bash script in the runner image:
yarn install --cache-folder .yarn-cache --immutable --immutable-cache --check-cache
NX_HEAD=$CI_COMMIT_SHA
NX_BASE=${CI_MERGE_REQUEST_DIFF_BASE_SHA:-$CI_COMMIT_BEFORE_SHA}
yarn nx affected --base=$NX_BASE --head=$NX_HEAD --target=test --parallel=3 --ci --code-coverage
and the build stage differs only in the last line.
Once you cache your .yarn-cache folder, the install phase will be much faster.
Also, in this case:
artifacts:
  paths:
    - node_modules
is not needed, since node_modules will come from the cache. Removing it from artifacts will also ease the load on your GitLab instance: node_modules is usually huge and doesn't really make sense as an artifact.
Problem #2: What is your artifact?
You haven't provided your Dockerfile or any clue about what exactly your build steps produce, so I assume your build stage outputs something into the dist directory. If you want to use that in your Docker build stage, you should declare it in the artifacts section of your build job:
build:
  stage: build
  extends: .distributed
  script:
    - yarn nx affected --base=$NX_BASE --head=$NX_HEAD --target=build --parallel=3
  artifacts:
    paths:
      - dist
After that, your forge-docker-landing-staging job will have access to your build artifacts.
Problem #3: Docker is not working!
Without any logs from your CI system it is impossible to help you, and it also violates SO's "one question per question" policy. If your other stages are running fine, consider using kaniko instead of Docker-in-Docker, since DinD is actually a security nightmare (you are basically giving root rights on your builder machine to anyone who can edit the .gitlab-ci.yml file). See https://docs.gitlab.com/ee/ci/docker/using_kaniko.html ; in your case, something like the job below (not tested) should work:
forge-docker-landing-staging:
  stage: forge
  image:
    name: gcr.io/kaniko-project/executor:v1.9.0-debug
    entrypoint: [""]
  rules:
    - if: $CI_COMMIT_BRANCH == "develop"
      allow_failure: true
    - exists:
        - "dist/apps/landing/*"
      allow_failure: true
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile.landing"
      --destination "${CI_REGISTRY_IMAGE}/landing:staging"  # an image reference can contain only one ":"

Does Google Cloud Build keep docker images between steps by default?

Does Google Cloud Build keep docker images between build steps by default?
Their docs say built images are discarded after every step, but I've seen examples in which build steps use images produced in previous ones. So, are built images discarded at the end of every step, or are they kept somewhere for the steps that follow?
Here's my cloudbuild.yaml.
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - '-t'
      - '${_ARTIFACT_REPO}'
      - .
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - '${_ARTIFACT_REPO}'
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    args:
      - run
      - deploy
      - my-service
      - '--image'
      - '${_ARTIFACT_REPO}'
      - '--region'
      - us-central1
      - '--allow-unauthenticated'
    entrypoint: gcloud
Yes, Cloud Build keeps images between steps.
You can think of Cloud Build as a simple VM or your local computer: when you build an image, it is stored locally (like when you run docker build -t TAG . on your machine).
All the steps run on the same instance, so you can reuse images built in previous steps. Your sample steps don't show this, but the following do:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args:
      - build
      - -t
      - MY_TAG
      - .
  - name: 'gcr.io/cloud-builders/docker'
    args:
      - run
      - MY_TAG
      - COMMAND
      - ARG
You can also use a previously built image as the image for a step:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args:
      - build
      - -t
      - MY_TAG
      - .
  - name: 'MY_TAG'
    args:
      - COMMAND
      - ARG
All the built images remain available for the duration of the build, whether it succeeds or fails.
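If you want to see this for yourself, a minimal sketch: build an image in one step and list images in the next; the tag is still there because every step talks to the same Docker daemon:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'my-tag', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['images']  # my-tag shows up here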
P.S. The reason I asked where you read that images are discarded after every step is that I haven't seen that in the docs, unless I missed something; if you have a link to it, please share it with us.

Google AppEngine ENV variables from Google Cloud Build Dockerfile

So I have a CloudBuild trigger that builds my cloudbuild.yaml file, and this is all fine and dandy. I also use the gcloud builder to run docker commands and pass ENV variables to my Dockerfile. For example:
steps:
  - name: 'gcr.io/$PROJECT_ID/swift:4.2'
    args: ['test']
    id: 'Running unit tests'
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '--build-arg', 'PROJECT=$PROJECT_ID', '-t', 'us.gcr.io/$PROJECT_ID/$BRANCH_NAME:$SHORT_SHA', '.']
    id: 'Building docker image'
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'us.gcr.io/$PROJECT_ID/$BRANCH_NAME:$SHORT_SHA']
    id: 'Pushing built image to registry'
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['app', 'deploy']
    id: 'Deploying to AppEngine'
timeout: 1800s # 30 minute timeout
As you can see, I'm using the variables that all GCP builds have by default ($PROJECT_ID, for example), and in the docker command I'm passing one as a build argument so I can use the ARG instruction in the Dockerfile:
ARG PROJECT
FROM gcr.io/${PROJECT}/swift:4.2 as builder
WORKDIR /App
# Other commands...
Now all of this works fine and I'm able to build my image, etc., and I want to deploy to App Engine in the final step.
The only problem is that the same Dockerfile uses the swift:4.2 base image that's only located in my Google Container Registry, so I need my $PROJECT_ID to pull it.
My question is: is there any way to have the App Engine build environment pass arguments to the docker build that builds my image when deploying? I have an app.yaml file and I know there's an env_variables: property, and I know I'd be able to use the Docker ARG or ENV instruction (can't remember which one) to get my $PROJECT_ID inside my Dockerfile. The only problem is that App Engine doesn't have such a property defined, as far as I know. The only other thing I can think of is to echo the $PROJECT_ID from a Cloud Build step onto the end of the app.yaml file, but if there's a cleaner approach I'd love to hear about it. Thanks!
I think I've found a solution for my needs.
gcloud app deploy has an --image-url flag that can point to an already built image rather than rebuilding from the Dockerfile, so I went with this as the final step in my cloudbuild.yaml:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy', '--image-url', 'gcr.io/$PROJECT_ID/$BRANCH_NAME:$SHORT_SHA']
Basically, it points to the image I just built and pushed to my container registry.

Integrating docker with gitlab-ci - how does the docker image get built and used?

I was previously using the shell executor for my gitlab runner to build my project. So far I have set up a pipeline that runs whatever commands I put in the gitlab-ci.yml file, seen below:
gitlab-ci.yml using shell runner
before_script:
  - npm install
  - npm install --save @angular/material @angular/cdk

cache:
  paths:
    - node_modules/

stages:
  - dev
  - staging
  - production

build_dev:
  stage: dev
  script:
    - rm ./package-lock.json
    - npm run build
    - ./node_modules/@angular/cli/bin/ng test --browsers PhantomJS --watch false
Now I want to switch to a docker image. I have reconfigured the runner to use a docker image, and I specified the image in my new gitlab-ci.yml file, seen below. I followed the gitlab-ci docker tutorial, and this is where it left off, so I'm not entirely sure where to go from here:
gitlab-ci.yml using docker runner
image: node:8.10.0

before_script:
  - npm install
  - npm install --save @angular/material @angular/cdk

cache:
  paths:
    - node_modules/

stages:
  - dev
  - staging
  - production

build_dev:
  stage: dev
  script:
    - rm ./package-lock.json
    - npm run build
    - ./node_modules/@angular/cli/bin/ng test --browsers PhantomJS --watch false
Questions:
With my current gitlab-ci.yml file, how does this build a docker image, and does it even build one? If it does, what does that mean? Currently the pipeline passes, but I have no idea whether it ran in a docker image or not (am I supposed to be able to tell?).
Also, let's say the docker image was created, the tests ran, and the pipeline passed; it should then push the code to a new repository (not included in the yml file yet). From what I gathered, the image isn't being pushed, just the code, right? So what do I do with this created docker image?
How does the Dockerfile get used? I see no link between the gitlab-ci.yml file and the Dockerfile.
Do I need to wrap all commands in the gitlab-ci.yml file in docker run <commands> or docker exec <commands>? Without one of these two commands, it seems like everything would just run on the server and not in a docker image.
I've seen people specify an image in both the gitlab-ci.yml file and the Dockerfile. I have an angular project, and I specified image: node:8.10.0. Should I specify the same image in the Dockerfile? I've seen projects where they are completely different, and I'm wondering what the use of both images is and whether picking one image over another will severely impact my builds.
You have to take a different approach to building your app if you want to fully dockerize it. Move the Angular-specific steps into a Dockerfile and put the docker operations inside your .gitlab-ci.yml instead, like here:
stages:
  - build
# - release
# - deploy

.build_template: &build_definition
  stage: build
  image: docker:17.06
  services:
    - docker:17.06-dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $CONTAINER_RELEASE_IMAGE || true
    - docker build --cache-from $CONTAINER_RELEASE_IMAGE -t $CONTAINER_IMAGE -f $DOCKERFILE ./
    - docker push $CONTAINER_IMAGE

build_app_job:
  <<: *build_definition
  variables:
    CONTAINER_IMAGE: $CI_REGISTRY_IMAGE/app:$CI_COMMIT_REF_SLUG
    CONTAINER_RELEASE_IMAGE: $CI_REGISTRY_IMAGE/app:latest
    DOCKERFILE: ./Dockerfile.app

build_nginx_job:
  <<: *build_definition
  variables:
    CONTAINER_IMAGE: $CI_REGISTRY_IMAGE/nginx:$CI_COMMIT_REF_SLUG
    CONTAINER_RELEASE_IMAGE: $CI_REGISTRY_IMAGE/nginx:latest
    DOCKERFILE: ./Dockerfile
You can set up a few build jobs: for production, development, staging, etc.
Right next to your .gitlab-ci.yaml you can put Dockerfile and Dockerfile.app; Dockerfile.app is for building your angular app:
FROM node:10.5.0-stretch
RUN mkdir -p /usr/src/app
RUN mkdir -p /usr/src/remote
WORKDIR /usr/src/app
COPY . .
# do your commands here
Now, with your app built, it can be served via a web server. Which one is your choice, and each choice comes with its own configuration; I can't even scratch the surface here. That would be implemented in the other Dockerfile; we usually use Nginx in our company, and a minimal sketch follows.
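A minimal sketch of such a serving Dockerfile, assuming the built app lands in dist/ (adjust the path to your actual build output):
FROM nginx:stable-alpine
# serve the built app from Nginx's default web root
COPY dist/ /usr/share/nginx/html/
EXPOSE 80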
From here on, it's about releasing your images and deploying them. I've only covered how to build them with docker, as that seems to be what the question is about.
If you want to deploy your image and run it somewhere, choose a provider (AWS, Heroku, your own infrastructure, have it your way), but that is far too much to cover in a single answer, so I'll leave it for another question once you specify where you'd like to deploy your newly built images and how you'd like to serve them. In our company we orchestrate things with Rancher, but there are multiple awesome competing options on the market.
Edit for a custom registry
The above .gitlab-ci configuration works with GitLab's internal registry only. If you want to use your own registry, change the values accordingly:
# previous configs
script:
  - docker login -u mysecretlogin -p mysecretpasswd registry.local.com
# further configs
change -u gitlab-ci-token to your login for the registry,
change $CI_JOB_TOKEN to your password,
change $CI_REGISTRY to your registry address.
Those values should be stored in Gitlab's CI secret variables and referenced via env variables so that they are not saved in the repository.
Finally, your script might look like the one below once you decide to protect these values. Refer to GitLab's official docs on how to add secret CI variables; it's a super easy task.
# previous configs
script:
  - docker login -u $registrylogin -p $registrypasswd $registryaddress
# further configs
