GCP Cloud Build via trigger: Dockerfile not found

I am trying to trigger a Docker image build on GCP Cloud Build via a webhook called from GitLab. The webhook works, but the build stops when docker build runs, with this error:
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /workspace/Dockerfile: no such file or directory
The YAML for this step is:
- name: gcr.io/cloud-builders/docker
  args:
    - build
    - '-t'
    - '${_ARTIFACT_REPO}'
    - .
where I later supply the variable _ARTIFACT_REPO via substitutions.
My GitLab repo includes the Dockerfile at the root level, so the repo structure is:
app/
.gitignore
Dockerfile
README.md
requirements.txt
The error message indicates that the Dockerfile cannot be found, but I do not understand why this is the case. Help is much appreciated!

Just solved the issue:
I followed the GCP docs (link), which include these two steps in the cloudbuild config:
- name: gcr.io/cloud-builders/git
  args:
    - clone
    - '-n'
    - 'git@gitlab.com/GITLAB_REPO'
    - .
  volumes:
    - name: ssh
      path: /root/.ssh
- name: gcr.io/cloud-builders/git
  args:
    - checkout
    - $_TO_SHA
As I did not require a specific checkout, I deleted the second of those steps, but I overlooked the -n flag in the first step, which prevents the cloned repo from being checked out.
So I just deleted the - '-n' line and the issue was solved.
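For reference, here is the clone step with the -n flag removed, so the repo is actually checked out into the workspace (a sketch derived from the config above):

- name: gcr.io/cloud-builders/git
  args:
    - clone
    - 'git@gitlab.com/GITLAB_REPO'
    - .
  volumes:
    - name: ssh
      path: /root/.ssh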


How to build Nx monorepo apps in Gitlab CI Runner

I am trying to set up a GitLab CI pipeline that performs the following actions:
Install yarn dependencies and cache them so I don't have to run yarn install in every job
Test all of my modified apps with the nx affected command
Build all of my modified apps with the nx affected command
Build my Docker images for my modified apps
I have tried many ways to do this in my CI and none of them worked; I'm quite stuck.
This is my current CI:
default:
  image: registry.gitlab.com/xxxx/xxxx/xxxx

stages:
  - setup
  - test
  - build
  - forge

.distributed:
  interruptible: true
  only:
    - main
    - develop
  cache:
    key:
      files:
        - yarn.lock
    paths:
      - node_modules
      - .yarn
  before_script:
    - yarn install --cache-folder .yarn-cache --immutable --immutable-cache --check-cache
    - NX_HEAD=$CI_COMMIT_SHA
    - NX_BASE=${CI_MERGE_REQUEST_DIFF_BASE_SHA:-$CI_COMMIT_BEFORE_SHA}
  artifacts:
    paths:
      - node_modules

test:
  stage: test
  extends: .distributed
  script:
    - yarn nx affected --base=$NX_BASE --head=$NX_HEAD --target=test --parallel=3 --ci --code-coverage

build:
  stage: build
  extends: .distributed
  script:
    - yarn nx affected --base=$NX_BASE --head=$NX_HEAD --target=build --parallel=3

forge-docker-landing-staging:
  stage: forge
  services:
    - docker:20.10.16-dind
  rules:
    - if: $CI_COMMIT_BRANCH == "develop"
      allow_failure: true
    - exists:
        - "dist/apps/landing/*"
      allow_failure: true
  script:
    - docker build -f Dockerfile.landing -t landing:staging .
Currently, here is what works and what doesn't:
❌ Caching doesn't work; yarn install runs in every job that has extends: .distributed
✅ The Nx affected commands (test and build) work as expected
❌ Building the apps with Docker doesn't work; I have some trouble with Docker-in-Docker.
Problem #1: You don't cache your .yarn-cache directory, even though you explicitly set it in the yarn install call in your before_script section. The solution is simple: add .yarn-cache to your cache.paths section.
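For example, the cache section of .distributed would then look like this (a sketch; the unused .yarn entry can probably be dropped, since your install command writes to .yarn-cache):

.distributed:
  cache:
    key:
      files:
        - yarn.lock
    paths:
      - node_modules
      - .yarn-cache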
Regarding
yarn install runs in every job that has extends: .distributed
This is intended behavior in your pipeline, since extends basically merges sections of your gitlab-ci config, so the test job effectively runs the following script in the runner image:
yarn install --cache-folder .yarn-cache --immutable --immutable-cache --check-cache
NX_HEAD=$CI_COMMIT_SHA
NX_BASE=${CI_MERGE_REQUEST_DIFF_BASE_SHA:-$CI_COMMIT_BEFORE_SHA}
yarn nx affected --base=$NX_BASE --head=$NX_HEAD --target=test --parallel=3 --ci --code-coverage
and the build job differs only in the last line.
Once you cache the .yarn-cache folder, the install phase will be much faster.
Also, in this case
artifacts:
  paths:
    - node_modules
is not needed, since node_modules will come from the cache. Removing it from artifacts will also ease the load on your GitLab instance: node_modules is usually huge and doesn't really make sense as an artifact.
Problem #2: What is your artifact?
You haven't provided your Dockerfile or any clue about what exactly your build steps produce, so I assume your build stage writes something to the dist directory. If you want to use that in your Docker build stage, you should specify it in the artifacts section of your build job:
build:
  stage: build
  extends: .distributed
  script:
    - yarn nx affected --base=$NX_BASE --head=$NX_HEAD --target=build --parallel=3
  artifacts:
    paths:
      - dist
After that, your forge-docker-landing-staging job will have access to your build artifacts.
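If you want to be explicit about which job's artifacts get downloaded, you can also add a dependencies section to the job (a sketch; by default GitLab downloads artifacts from all jobs in earlier stages):

forge-docker-landing-staging:
  stage: forge
  dependencies:
    - build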
Problem #3: Docker is not working!
Without any logs from your CI system it's impossible to help you with this, and it also runs against SO's "one question per question" policy. If your other stages run fine, consider using kaniko instead of Docker-in-Docker, since DinD is a security nightmare (you are basically giving root rights on your builder machine to anyone who can edit the .gitlab-ci.yml file). See https://docs.gitlab.com/ee/ci/docker/using_kaniko.html ; in your case something like the job below (not tested) should work:
forge-docker-landing-staging:
  stage: forge
  image:
    name: gcr.io/kaniko-project/executor:v1.9.0-debug
    entrypoint: [""]
  rules:
    - if: $CI_COMMIT_BRANCH == "develop"
      allow_failure: true
    - exists:
        - "dist/apps/landing/*"
      allow_failure: true
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile.landing"
      --destination "${CI_REGISTRY_IMAGE}/landing:staging"

Docker secret not mounting to default location

I'm using docker-compose to produce a docker image which requires access to a secure Azure Artifacts directory via Paket. As I'm sure at least some people are aware, Paket does not have default compatibility with the Azure Artifacts Credential Provider. To gain the access I need, I'm trying to mount the access token produced by the credential provider as a secret, then consume it using cat within a paket config command. cat then returns an error message stating that the file is not found at the default secret location.
I'm running this code within an Azure Pipeline on the Microsoft-provided ubuntu-latest agent.
Here are the relevant code snippets (it's possible I'm going into too much detail):
docker-compose.ci.build.yml:
version: '3.6'
services:
  ci_build:
    build:
      context: .
      dockerfile: Dockerfile
    image: <IMAGE IDENTITY>
    secrets:
      - azure_credential
secrets:
  azure_credential:
    file: ./credential.txt
Dockerfile:
FROM mcr.microsoft.com/dotnet/sdk:6.0.102-bullseye-slim-amd64 AS build
LABEL maintainer="<Engineering lead>"
WORKDIR /src
<Various COPY instructions>
RUN dotnet tool restore
RUN dotnet paket restore
RUN --mount=type=secret,id=azure_credential dotnet paket config add-token "<ARTIFACT_FEED_URL>" "$(cat /run/secrets/azure_credential)"
Azure pipeline definition YAML:
jobs:
  - job: BuildPublish
    displayName: Build & Publish
    steps:
      - task: PowerShell@2
        displayName: pwsh build.ps1
        inputs:
          filePath: ${{ parameters.workingDirectory }}/.azure-pipelines/build.ps1
          pwsh: true
          workingDirectory: ${{ parameters.workingDirectory }}
        env:
          SYSTEM_ACCESSTOKEN: $(System.AccessToken)
The relevant lines of the PowerShell script that invokes docker-compose:
$projectRoot = Split-Path -Path $PSScriptRoot -Parent
Push-Location -Path $projectRoot
try {
    ...
    Out-File -FilePath ./credential.txt -InputObject $Env:SYSTEM_ACCESSTOKEN
    ...
    & docker-compose -f ./docker-compose.ci.build.yml build
    ...
}
finally {
    ...
    Pop-Location
}
The error message:
0.276 cat: /run/secrets/azure_credential: No such file or directory
If there's other relevant code, let me know.
I tried to verify that the environment variable housing the secret on the agent even existed, and that its value was being saved to the ./credential.txt file for mounting into the image; the text file was being created properly. I've tried fiddling with the syntax of all the relevant commands (fun fact: the Docker docs show two different versions of the mount syntax, but the other version just crashed). I also tried Windows-style default paths in case my source image was a Windows one, but it doesn't appear to be.
Essentially, here's where I've left it: I know that the file ./credential.txt exists and contains a value. I know my mounting syntax is correct, or Docker would crash. The issue appears to be something to do with the default mounting path and/or how docker-compose passes its secrets.
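A quick way to see what BuildKit actually mounts during the build is a throwaway instruction in the Dockerfile (a debugging sketch only, to be removed afterwards):

# Debugging only: list whatever is mounted under /run/secrets in this step
RUN --mount=type=secret,id=azure_credential ls -l /run/secrets/

If the listing comes back empty, the secret never reached the build, which points at the compose plumbing rather than the Dockerfile syntax.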
I figured it out. For reasons I do not understand, the path to the mounted secret has to be defined as an environment variable in the docker-compose YAML, like this:
version: '3.6'
services:
  ci_build:
    build:
      context: .
      dockerfile: Dockerfile
    image: <IMAGE IDENTITY>
    secrets:
      - azure_credential
    environment:
      AZURE_CREDENTIAL_FILE: /run/secrets/azure_credential
secrets:
  azure_credential:
    file: credential.txt
This solved the issue. If anyone knows why, I'd love to hear it.

How can I use environment variables in Cloud Run with continuous deployment?

I am using Cloud Run and I want to enable continuous deployment with GitHub, but obviously I can't upload my env variables to the repo, so what can I use?
I can't set them when I use "Deploy and edit a new version", because that isn't continuous: I have to open the service, click it, and fill in the env variables by hand.
I can't use ENV in my Dockerfile, because I have to upload it to my GitHub repo.
I can't use substitutions in Cloud Build, because I am using a Dockerfile and that option is only for cloudbuild.yaml (and I don't know how to create one, I only know Docker :)
Maybe I can edit the YAML on Cloud Run, but I am not sure if that is a good option.
Maybe I can pass them if I use gcloud builds, but then I have to click "Deploy and edit a new version", and that is not continuous deployment.
Here is my Dockerfile, if you want to help me transform it into a cloudbuild.yaml:
FROM node:15
WORKDIR /app
COPY package*.json ./
ENV ENV production
ENV PORT 3000
ENV API_URL https://api.mysite.com
RUN npm install --only=production
COPY . .
RUN npm run build
CMD ["npm", "start"]
In the Google documentation I found how to create a cloudbuild.yaml for continuous deployment:
steps:
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/api:$COMMIT_SHA', '.']
  # Push the container image to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/api:$COMMIT_SHA']
  # Deploy container image to Cloud Run
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args:
      - 'run'
      - 'deploy'
      - 'api'
      - '--image'
      - 'gcr.io/$PROJECT_ID/api:$COMMIT_SHA'
      - '--region'
      - 'us-east1'
      - '--platform'
      - 'managed'
images:
  - 'gcr.io/$PROJECT_ID/api:$COMMIT_SHA'
You have to replace api with the name of your service.
Afterwards, I used "Deploy and edit a new version" once and filled in the environment variables there.
Now all subsequent continuous deployments keep the same environment variables that I set when I deployed that version.
You're not passing any environment variables to the service.
Run gcloud beta run deploy --help and check the --set-env-vars flag.
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args:
    - 'run'
    - 'deploy'
    - 'api'
    - '--image'
    - 'gcr.io/$PROJECT_ID/api:$COMMIT_SHA'
    - '--region'
    - 'us-east1'
    - '--platform'
    - 'managed'
    - '--set-env-vars'
    - 'API_URL=${_API_URL}'
You can use substitutions in the build trigger: https://cloud.google.com/build/docs/configuring-builds/substitute-variable-values
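For a manual build you can pass the same substitution on the command line (a sketch; the _API_URL name matches the build config above):

gcloud builds submit \
  --config=cloudbuild.yaml \
  --substitutions=_API_URL=https://api.mysite.com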

Configure bitbucket-pipeline.yml to use a DockerFile from repository to build image when running a pipeline

I am new to creating pipelines on Bitbucket to automate building a specific branch after a merge.
The project is written in C++ and has the following structure:
PROJECT FOLDER
- .devcontainer/
    - devcontainer.json
- bin/
- doc/
- lib/
- src/
    - CMakeLists.txt
    - ...
- CMakeLists.txt
- clean.sh
- compile.sh
- configure.sh
- DockerFile
- bitbucket-pipelines.yml
We created a DockerFile with all the settings required to build the project. Is there any way to point bitbucket-pipelines.yml at the DockerFile from the repository so it builds the image from it?
I have been able to upload the Docker image to my Docker Hub account and use it with my credentials by defining:
image:
  name: <dockerhubname>/<dockername>
  username: $DOCKER_HUB_USERNAME
  password: $DOCKER_HUB_PASSWORD
  email: $DOCKER_HUB_EMAIL
but I am not sure how to make Bitbucket take the DockerFile from the repository and use it to build the image, nor whether building this way will increase the build time.
Thanks in advance!
If you want to build your image during the pipeline, you need the same steps as when building it on your machine:
Build your image: docker build -t $APP_NAME .
Push it to your repo (e.g. Docker Hub): docker push $APP_NAME:$VERSION
You can do something like this:
steps:
  - step: &build
      name: Build Docker Image
      services:
        - docker
      script:
        - docker build -t $APP_NAME .
        - docker push $APP_NAME:$VERSION
Keep in mind that every step in your pipeline runs in a Docker container, which lets you do whatever you want. The docker service gives you an out-of-the-box Docker client. After the image is pushed, you can use it in another step; you just need to specify the image for that step, as sketched below.
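For instance, a later step could run inside the image you just pushed (a sketch; the registry path, version tag, and script names are assumptions based on the question):

- step:
    name: Build project inside the custom image
    image:
      name: <dockerhubname>/$APP_NAME:$VERSION
      username: $DOCKER_HUB_USERNAME
      password: $DOCKER_HUB_PASSWORD
    script:
      - ./configure.sh
      - ./compile.sh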

Google Cloud Build - Different scopes of Dockerfile and cloudbuild.yaml

I recently asked a question about why I get the error Specified public directory 'dist/browser' does not exist, can't deploy hosting to site PROJECT-ID when I'm trying to deploy to Firebase Hosting from my cloudbuild.yaml. However, since I found that question too bloated with information, I tried to break it down.
I created a simple image to visualize what happens when I call gcloud builds submit --config=cloudbuild.yaml. So why can't I access the directory dist/browser from cloudbuild.yaml, even though it is processed after the Dockerfile in which dist/browser is created?
Cloud Build is best conceptualized as a series of functions (steps) applied to data in the form of a local file system (often just /workspace as this is a default volume mount added to each step, but you can add other volume mounts) and the Internet.
Output of each function (step) is self-contained unless you explicitly publish data back to one of these two sources (one of the step's volume mounts or the Internet).
In this case, docker build consumes local files (not shown in your example) and generates dist/browser inside the resulting image, but this folder is only accessible within that image; nothing is added to e.g. /workspace that you could use in subsequent steps.
In order to use that directory subsequently, you can either:
Hack a way to mount the (file system of the) image generated by the step and extract the directory from it (not advised; possibly not permitted): you'd need to run that image as a container and then docker cp the files from it back into the Cloud Build VM's file system (perhaps somewhere under /workspace); see the sketch after this list.
Not put the directory in an image in the first place (see the proposal below).
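A sketch of that extraction hack, in case you ever need it (the image tag and the in-image path of dist/browser are assumptions):

- name: gcr.io/cloud-builders/docker
  entrypoint: bash
  args:
    - -c
    - |
      docker build -t app-image .                                   # hypothetical tag
      docker create --name extract app-image
      mkdir -p /workspace/dist
      docker cp extract:/app/dist/browser /workspace/dist/browser   # assumed in-image path
      docker rm extract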
Proposal
Instead of docker build'ing an image containing the directory, deconstruct the Dockerfile into a series of Cloud Build steps. This way the artifacts you want (if written somewhere under one of the steps' volume mounts) will be available in subsequent steps:
steps:
  - name: gcr.io/cloud-builders/npm
    args:
      - install
  - name: gcr.io/cloud-builders/npm
    args:
      - run
      - build:ssr # Presumably this is where dist/browser is generated?
  - name: firebase
    args:
      - deploy # dist/browser
NOTE Every Cloud Build step has an implicit:
- name: some-step
  volumes:
    - name: workspace
      path: /workspace
Proof
Here's a minimal Cloud Build config that uses a volume called testdir that maps to the Cloud Build VM's /testdir directory.
NOTE The example uses testdir to prove the point. Each Cloud Build step automatically mounts /workspace and this could be used instead.
The config:
Lists the empty /testdir
Creates a file freddie.txt in /testdir
Lists /testdir now containing freddie.txt
options:
  # volumes:
  #   - name: testdir
  #     path: /testdir

steps:
  - name: busybox
    volumes:
      - name: testdir
        path: /testdir
    args:
      - ash
      - -c
      - "ls -1a /testdir"
  - name: busybox
    volumes:
      - name: testdir
        path: /testdir
    args:
      - ash
      - -c
      - 'echo "Hello Freddie" > /testdir/freddie.txt'
  - name: busybox
    volumes:
      - name: testdir
        path: /testdir
    args:
      - ash
      - -c
      - "ls -1a /testdir"
NOTE Uncommenting volumes under options would remove the need to reproduce the volumes in each step.
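Uncommented, that options block would look like this, and the volume would then be mounted into every step automatically:

options:
  volumes:
    - name: testdir
      path: /testdir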
The edited output is:
gcloud builds submit \
  --config=./cloudbuild.yaml \
  --project=${PROJECT}
# Lists (empty) /testdir
Starting Step #0
Step #0: Pulling image: busybox
Step #0: .
Step #0: ..
# Creates /testdir/freddie.txt
Starting Step #1
Step #1: Already have image: busybox
Finished Step #1
# Lists /testdir, now containing freddie.txt
Starting Step #2
Step #2: .
Step #2: ..
Step #2: freddie.txt
Finished Step #2
